Test Report: Docker_Linux_crio 21895

382ea0a147905a9644676f66ab1ed2cbc8737b3b:2025-11-15:42335

Test failures (39/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 15.55
36 TestAddons/parallel/RegistryCreds 0.43
37 TestAddons/parallel/Ingress 148.78
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 54.1
42 TestAddons/parallel/Headlamp 2.58
43 TestAddons/parallel/CloudSpanner 5.27
44 TestAddons/parallel/LocalPath 10.16
45 TestAddons/parallel/NvidiaDevicePlugin 6.33
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 6.25
97 TestFunctional/parallel/ServiceCmdConnect 603.01
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.63
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.87
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 433.62
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.66
191 TestJSONOutput/pause/Command 1.98
197 TestJSONOutput/unpause/Command 1.82
286 TestPause/serial/Pause 6.25
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.3
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.29
313 TestStartStop/group/old-k8s-version/serial/Pause 5.89
319 TestStartStop/group/no-preload/serial/Pause 6.33
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.31
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.1
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.56
339 TestStartStop/group/newest-cni/serial/Pause 6.28
352 TestStartStop/group/embed-certs/serial/Pause 5.95
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.97

TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable volcano --alsologtostderr -v=1: exit status 11 (256.564397ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:10:46.052863  368396 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:10:46.053152  368396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:10:46.053163  368396 out.go:374] Setting ErrFile to fd 2...
	I1115 09:10:46.053167  368396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:10:46.053339  368396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:10:46.053646  368396 mustload.go:66] Loading cluster: addons-454747
	I1115 09:10:46.053985  368396 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:10:46.054000  368396 addons.go:607] checking whether the cluster is paused
	I1115 09:10:46.054076  368396 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:10:46.054090  368396 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:10:46.054457  368396 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:10:46.072664  368396 ssh_runner.go:195] Run: systemctl --version
	I1115 09:10:46.072726  368396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:10:46.092268  368396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:10:46.185161  368396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:10:46.185247  368396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:10:46.216083  368396 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:10:46.216105  368396 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:10:46.216109  368396 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:10:46.216112  368396 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:10:46.216115  368396 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:10:46.216128  368396 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:10:46.216131  368396 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:10:46.216134  368396 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:10:46.216136  368396 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:10:46.216141  368396 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:10:46.216144  368396 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:10:46.216147  368396 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:10:46.216149  368396 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:10:46.216152  368396 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:10:46.216155  368396 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:10:46.216162  368396 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:10:46.216167  368396 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:10:46.216171  368396 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:10:46.216174  368396 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:10:46.216176  368396 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:10:46.216179  368396 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:10:46.216181  368396 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:10:46.216183  368396 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:10:46.216186  368396 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:10:46.216188  368396 cri.go:89] found id: ""
	I1115 09:10:46.216235  368396 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:10:46.231051  368396 out.go:203] 
	W1115 09:10:46.232197  368396 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:10:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:10:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:10:46.232222  368396 out.go:285] * 
	* 
	W1115 09:10:46.236320  368396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:10:46.237698  368396 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
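
The same cleanup failure recurs across most of the addon tests in this report: "addons disable" exits with status 11 because minikube's paused-cluster check shells into the node and runs "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory". A minimal reproduction sketch, using this run's profile name (addons-454747) and the same commands that appear in the log above (illustrative only, not part of the test):

	# The crictl listing that precedes the runc call succeeds:
	out/minikube-linux-amd64 -p addons-454747 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The runc call is the step that fails, since /run/runc is absent on the node:
	#   time="..." level=error msg="open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-454747 ssh "sudo runc list -f json"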

TestAddons/parallel/Registry (15.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.464047ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003036042s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003001442s
addons_test.go:392: (dbg) Run:  kubectl --context addons-454747 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-454747 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-454747 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.074393124s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 ip
2025/11/15 09:11:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable registry --alsologtostderr -v=1: exit status 11 (243.698103ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:11:12.408707  369861 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:12.408955  369861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:12.408963  369861 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:12.408967  369861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:12.409187  369861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:12.409458  369861 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:12.409811  369861 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:12.409828  369861 addons.go:607] checking whether the cluster is paused
	I1115 09:11:12.409930  369861 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:12.409944  369861 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:12.410377  369861 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:12.428370  369861 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:12.428460  369861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:12.445951  369861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:12.539066  369861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:12.539156  369861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:12.571081  369861 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:12.571115  369861 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:12.571119  369861 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:12.571122  369861 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:12.571124  369861 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:12.571128  369861 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:12.571131  369861 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:12.571133  369861 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:12.571135  369861 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:12.571156  369861 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:12.571161  369861 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:12.571163  369861 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:12.571166  369861 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:12.571168  369861 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:12.571170  369861 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:12.571181  369861 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:12.571187  369861 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:12.571192  369861 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:12.571195  369861 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:12.571197  369861 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:12.571199  369861 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:12.571201  369861 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:12.571204  369861 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:12.571206  369861 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:12.571208  369861 cri.go:89] found id: ""
	I1115 09:11:12.571260  369861 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:12.585609  369861 out.go:203] 
	W1115 09:11:12.586877  369861 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:12.586897  369861 out.go:285] * 
	* 
	W1115 09:11:12.590891  369861 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:12.592145  369861 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.55s)

TestAddons/parallel/RegistryCreds (0.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.42115ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-454747
addons_test.go:332: (dbg) Run:  kubectl --context addons-454747 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (255.866533ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:11:19.040172  371532 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:19.040287  371532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:19.040298  371532 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:19.040306  371532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:19.040535  371532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:19.040862  371532 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:19.041283  371532 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:19.041303  371532 addons.go:607] checking whether the cluster is paused
	I1115 09:11:19.041433  371532 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:19.041459  371532 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:19.041891  371532 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:19.061747  371532 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:19.061811  371532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:19.081481  371532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:19.175755  371532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:19.175832  371532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:19.205763  371532 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:19.205806  371532 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:19.205814  371532 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:19.205819  371532 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:19.205824  371532 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:19.205829  371532 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:19.205834  371532 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:19.205838  371532 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:19.205843  371532 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:19.205857  371532 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:19.205862  371532 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:19.205865  371532 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:19.205867  371532 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:19.205870  371532 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:19.205873  371532 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:19.205882  371532 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:19.205887  371532 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:19.205891  371532 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:19.205893  371532 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:19.205896  371532 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:19.205900  371532 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:19.205902  371532 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:19.205905  371532 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:19.205907  371532 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:19.205910  371532 cri.go:89] found id: ""
	I1115 09:11:19.205948  371532 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:19.219877  371532 out.go:203] 
	W1115 09:11:19.221250  371532 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:19.221271  371532 out.go:285] * 
	* 
	W1115 09:11:19.225789  371532 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:19.228644  371532 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.43s)

TestAddons/parallel/Ingress (148.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-454747 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-454747 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-454747 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [05b11fbe-56e5-4a05-b781-867491771b80] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [05b11fbe-56e5-4a05-b781-867491771b80] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004025363s
I1115 09:11:23.401173  359063 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.134304953s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-454747 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-454747
helpers_test.go:243: (dbg) docker inspect addons-454747:

-- stdout --
	[
	    {
	        "Id": "931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3",
	        "Created": "2025-11-15T09:08:53.071755917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361079,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:08:53.105106011Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/hostname",
	        "HostsPath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/hosts",
	        "LogPath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3-json.log",
	        "Name": "/addons-454747",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-454747:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-454747",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3",
	                "LowerDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-454747",
	                "Source": "/var/lib/docker/volumes/addons-454747/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-454747",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-454747",
	                "name.minikube.sigs.k8s.io": "addons-454747",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c927aa05be2b299cb7cb65e10fa57832d3fe83b5685f4f2d37af98648fb98a8",
	            "SandboxKey": "/var/run/docker/netns/5c927aa05be2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-454747": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e7d1342f2c11565f602c3bd0dfb2d31a9a92160d201bf9a893b8dc748fe9244f",
	                    "EndpointID": "7ca09f174830ce94b08255c7ccb6cba5d49ce52a1573a670c226f2e89ceaf912",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:a7:ec:2c:70:a1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-454747",
	                        "931c889a25ac"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-454747 -n addons-454747
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-454747 logs -n 25: (1.187099458s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-730212 --alsologtostderr --binary-mirror http://127.0.0.1:42111 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-730212 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ delete  │ -p binary-mirror-730212                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-730212 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p addons-454747                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ addons  │ disable dashboard -p addons-454747                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ start   │ -p addons-454747 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:10 UTC │
	│ addons  │ addons-454747 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │                     │
	│ addons  │ addons-454747 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │                     │
	│ addons  │ addons-454747 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ ssh     │ addons-454747 ssh cat /opt/local-path-provisioner/pvc-cb0fe8e1-5280-47d2-a0f7-3e04a804af72_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │ 15 Nov 25 09:11 UTC │
	│ addons  │ addons-454747 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ ip      │ addons-454747 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │ 15 Nov 25 09:11 UTC │
	│ addons  │ addons-454747 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ enable headlamp -p addons-454747 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-454747                                                                                                                                                                                                                                                                                                                                                                                           │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │ 15 Nov 25 09:11 UTC │
	│ addons  │ addons-454747 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ ssh     │ addons-454747 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ ip      │ addons-454747 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-454747        │ jenkins │ v1.37.0 │ 15 Nov 25 09:13 UTC │ 15 Nov 25 09:13 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:08:30.520592  360443 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:08:30.520894  360443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:30.520905  360443 out.go:374] Setting ErrFile to fd 2...
	I1115 09:08:30.520910  360443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:30.521138  360443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:08:30.521757  360443 out.go:368] Setting JSON to false
	I1115 09:08:30.522770  360443 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3051,"bootTime":1763194659,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:08:30.522883  360443 start.go:143] virtualization: kvm guest
	I1115 09:08:30.524759  360443 out.go:179] * [addons-454747] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:08:30.526035  360443 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:08:30.526034  360443 notify.go:221] Checking for updates...
	I1115 09:08:30.527591  360443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:08:30.529136  360443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:08:30.530442  360443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:08:30.531774  360443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:08:30.532907  360443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:08:30.534245  360443 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:08:30.558319  360443 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:08:30.558422  360443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:30.614104  360443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-15 09:08:30.604256949 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:30.614223  360443 docker.go:319] overlay module found
	I1115 09:08:30.615847  360443 out.go:179] * Using the docker driver based on user configuration
	I1115 09:08:30.617075  360443 start.go:309] selected driver: docker
	I1115 09:08:30.617093  360443 start.go:930] validating driver "docker" against <nil>
	I1115 09:08:30.617106  360443 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:08:30.617714  360443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:30.676046  360443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-15 09:08:30.665694421 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:30.676198  360443 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:08:30.676456  360443 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:08:30.678733  360443 out.go:179] * Using Docker driver with root privileges
	I1115 09:08:30.680132  360443 cni.go:84] Creating CNI manager for ""
	I1115 09:08:30.680218  360443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:08:30.680231  360443 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:08:30.680321  360443 start.go:353] cluster config:
	{Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:08:30.681867  360443 out.go:179] * Starting "addons-454747" primary control-plane node in "addons-454747" cluster
	I1115 09:08:30.683166  360443 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:08:30.684497  360443 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:08:30.685561  360443 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:30.685610  360443 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:08:30.685640  360443 cache.go:65] Caching tarball of preloaded images
	I1115 09:08:30.685662  360443 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:08:30.685756  360443 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:08:30.685775  360443 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:08:30.686190  360443 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/config.json ...
	I1115 09:08:30.686223  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/config.json: {Name:mk47730805923e8dabc6c0167b68b1e7cdaa8bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:30.703537  360443 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:08:30.703680  360443 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:08:30.703705  360443 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:08:30.703709  360443 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:08:30.703721  360443 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:08:30.703726  360443 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 09:08:44.551564  360443 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 09:08:44.551619  360443 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:08:44.551665  360443 start.go:360] acquireMachinesLock for addons-454747: {Name:mk2e6cf2df2df659fccf71860e02c2b25f7f44a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:08:44.551794  360443 start.go:364] duration metric: took 99.288µs to acquireMachinesLock for "addons-454747"
	I1115 09:08:44.551827  360443 start.go:93] Provisioning new machine with config: &{Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:08:44.551937  360443 start.go:125] createHost starting for "" (driver="docker")
	I1115 09:08:44.553730  360443 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 09:08:44.553985  360443 start.go:159] libmachine.API.Create for "addons-454747" (driver="docker")
	I1115 09:08:44.554021  360443 client.go:173] LocalClient.Create starting
	I1115 09:08:44.554120  360443 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 09:08:44.846755  360443 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 09:08:44.886166  360443 cli_runner.go:164] Run: docker network inspect addons-454747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:08:44.903247  360443 cli_runner.go:211] docker network inspect addons-454747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:08:44.903344  360443 network_create.go:284] running [docker network inspect addons-454747] to gather additional debugging logs...
	I1115 09:08:44.903369  360443 cli_runner.go:164] Run: docker network inspect addons-454747
	W1115 09:08:44.920264  360443 cli_runner.go:211] docker network inspect addons-454747 returned with exit code 1
	I1115 09:08:44.920314  360443 network_create.go:287] error running [docker network inspect addons-454747]: docker network inspect addons-454747: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-454747 not found
	I1115 09:08:44.920327  360443 network_create.go:289] output of [docker network inspect addons-454747]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-454747 not found
	
	** /stderr **
	I1115 09:08:44.920534  360443 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:08:44.937797  360443 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e82e80}
	I1115 09:08:44.937854  360443 network_create.go:124] attempt to create docker network addons-454747 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 09:08:44.937910  360443 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-454747 addons-454747
	I1115 09:08:44.984019  360443 network_create.go:108] docker network addons-454747 192.168.49.0/24 created
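
The create command above carves out 192.168.49.0/24 (gateway 192.168.49.1, MTU 1500) as the cluster's dedicated bridge network. To double-check that step by hand, something along these lines should work (a sketch; the network name and expected values are taken from the log above):

    # Confirm the subnet, gateway and MTU of the bridge network minikube just created.
    docker network inspect addons-454747 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
    # expected, per the create command above: 192.168.49.0/24 gw=192.168.49.1 mtu=1500
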
	I1115 09:08:44.984056  360443 kic.go:121] calculated static IP "192.168.49.2" for the "addons-454747" container
	I1115 09:08:44.984119  360443 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:08:45.000796  360443 cli_runner.go:164] Run: docker volume create addons-454747 --label name.minikube.sigs.k8s.io=addons-454747 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:08:45.020696  360443 oci.go:103] Successfully created a docker volume addons-454747
	I1115 09:08:45.020811  360443 cli_runner.go:164] Run: docker run --rm --name addons-454747-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454747 --entrypoint /usr/bin/test -v addons-454747:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:08:48.698730  360443 cli_runner.go:217] Completed: docker run --rm --name addons-454747-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454747 --entrypoint /usr/bin/test -v addons-454747:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (3.677857103s)
	I1115 09:08:48.698770  360443 oci.go:107] Successfully prepared a docker volume addons-454747
	I1115 09:08:48.698848  360443 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:48.698861  360443 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 09:08:48.698921  360443 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454747:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 09:08:52.999414  360443 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454747:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.300433631s)
	I1115 09:08:52.999451  360443 kic.go:203] duration metric: took 4.300585717s to extract preloaded images to volume ...
	W1115 09:08:52.999567  360443 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 09:08:52.999624  360443 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 09:08:52.999670  360443 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:08:53.055152  360443 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-454747 --name addons-454747 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454747 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-454747 --network addons-454747 --ip 192.168.49.2 --volume addons-454747:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:08:53.341958  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Running}}
	I1115 09:08:53.360788  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:08:53.379124  360443 cli_runner.go:164] Run: docker exec addons-454747 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:08:53.429118  360443 oci.go:144] the created container "addons-454747" has a running status.
	I1115 09:08:53.429156  360443 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa...
	I1115 09:08:53.498032  360443 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:08:53.525547  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:08:53.542965  360443 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:08:53.542983  360443 kic_runner.go:114] Args: [docker exec --privileged addons-454747 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:08:53.611516  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:08:53.631809  360443 machine.go:94] provisionDockerMachine start ...
	I1115 09:08:53.631944  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:53.658493  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:53.658863  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:53.658887  360443 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:08:53.659755  360443 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60756->127.0.0.1:33144: read: connection reset by peer
	I1115 09:08:56.792073  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-454747
	
	I1115 09:08:56.792119  360443 ubuntu.go:182] provisioning hostname "addons-454747"
	I1115 09:08:56.792187  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:56.811097  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:56.811385  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:56.811424  360443 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-454747 && echo "addons-454747" | sudo tee /etc/hostname
	I1115 09:08:56.951043  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-454747
	
	I1115 09:08:56.951132  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:56.970355  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:56.970648  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:56.970675  360443 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-454747' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-454747/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-454747' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:08:57.101811  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:08:57.101854  360443 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:08:57.101887  360443 ubuntu.go:190] setting up certificates
	I1115 09:08:57.101904  360443 provision.go:84] configureAuth start
	I1115 09:08:57.101981  360443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454747
	I1115 09:08:57.123291  360443 provision.go:143] copyHostCerts
	I1115 09:08:57.123409  360443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:08:57.123571  360443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:08:57.123803  360443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:08:57.123921  360443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.addons-454747 san=[127.0.0.1 192.168.49.2 addons-454747 localhost minikube]
	I1115 09:08:57.400263  360443 provision.go:177] copyRemoteCerts
	I1115 09:08:57.400348  360443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:08:57.400387  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.419834  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:57.515235  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:08:57.535650  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:08:57.554023  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:08:57.571921  360443 provision.go:87] duration metric: took 469.992652ms to configureAuth
	I1115 09:08:57.571950  360443 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:08:57.572132  360443 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:08:57.572240  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.591068  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:57.591298  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:57.591314  360443 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:08:57.840428  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:08:57.840494  360443 machine.go:97] duration metric: took 4.208654448s to provisionDockerMachine
	I1115 09:08:57.840510  360443 client.go:176] duration metric: took 13.28648022s to LocalClient.Create
	I1115 09:08:57.840537  360443 start.go:167] duration metric: took 13.286552258s to libmachine.API.Create "addons-454747"
	I1115 09:08:57.840547  360443 start.go:293] postStartSetup for "addons-454747" (driver="docker")
	I1115 09:08:57.840565  360443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:08:57.840632  360443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:08:57.840684  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.858994  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:57.955912  360443 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:08:57.959755  360443 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:08:57.959782  360443 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:08:57.959794  360443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:08:57.959857  360443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:08:57.959881  360443 start.go:296] duration metric: took 119.326869ms for postStartSetup
	I1115 09:08:57.960174  360443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454747
	I1115 09:08:57.979354  360443 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/config.json ...
	I1115 09:08:57.979662  360443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:08:57.979710  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.997384  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:58.088969  360443 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:08:58.093792  360443 start.go:128] duration metric: took 13.541816792s to createHost
	I1115 09:08:58.093822  360443 start.go:83] releasing machines lock for "addons-454747", held for 13.542012001s
	I1115 09:08:58.093926  360443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454747
	I1115 09:08:58.112306  360443 ssh_runner.go:195] Run: cat /version.json
	I1115 09:08:58.112360  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:58.112462  360443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:08:58.112560  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:58.131279  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:58.131849  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:58.278347  360443 ssh_runner.go:195] Run: systemctl --version
	I1115 09:08:58.284841  360443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:08:58.320500  360443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:08:58.325496  360443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:08:58.325565  360443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:08:58.353369  360443 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:08:58.353406  360443 start.go:496] detecting cgroup driver to use...
	I1115 09:08:58.353445  360443 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:08:58.353505  360443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:08:58.370580  360443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:08:58.382815  360443 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:08:58.382876  360443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:08:58.398955  360443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:08:58.416736  360443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:08:58.498408  360443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:08:58.587970  360443 docker.go:234] disabling docker service ...
	I1115 09:08:58.588042  360443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:08:58.607652  360443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:08:58.620908  360443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:08:58.707145  360443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:08:58.789765  360443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:08:58.802580  360443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:08:58.816215  360443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:08:58.816272  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.826818  360443 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:08:58.826882  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.835988  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.844958  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.853802  360443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:08:58.862273  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.871528  360443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.885142  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.894331  360443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:08:58.902157  360443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:08:58.909632  360443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:08:58.988834  360443 ssh_runner.go:195] Run: sudo systemctl restart crio
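
The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place before this restart. A rough spot-check of what they should leave behind (a sketch; the keys and values are taken from the commands in the log, not from a captured copy of the file):

    # Spot-check the CRI-O drop-in after the edits above.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (added under default_sysctls = [ ... ])
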
	I1115 09:08:59.096088  360443 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:08:59.096176  360443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:08:59.100178  360443 start.go:564] Will wait 60s for crictl version
	I1115 09:08:59.100232  360443 ssh_runner.go:195] Run: which crictl
	I1115 09:08:59.103976  360443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:08:59.128578  360443 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:08:59.128692  360443 ssh_runner.go:195] Run: crio --version
	I1115 09:08:59.157293  360443 ssh_runner.go:195] Run: crio --version
	I1115 09:08:59.188168  360443 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:08:59.189679  360443 cli_runner.go:164] Run: docker network inspect addons-454747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:08:59.207620  360443 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:08:59.211932  360443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:08:59.222601  360443 kubeadm.go:884] updating cluster {Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:08:59.222807  360443 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:59.222855  360443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:08:59.254925  360443 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:08:59.254948  360443 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:08:59.254995  360443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:08:59.282354  360443 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:08:59.282385  360443 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:08:59.282408  360443 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:08:59.282514  360443 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-454747 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:08:59.282603  360443 ssh_runner.go:195] Run: crio config
	I1115 09:08:59.329698  360443 cni.go:84] Creating CNI manager for ""
	I1115 09:08:59.329723  360443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:08:59.329754  360443 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:08:59.329784  360443 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-454747 NodeName:addons-454747 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:08:59.329968  360443 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-454747"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:08:59.330048  360443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:08:59.338274  360443 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:08:59.338342  360443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:08:59.346278  360443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:08:59.359786  360443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:08:59.375261  360443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
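
The kubeadm configuration printed a few lines above is what gets staged here as /var/tmp/minikube/kubeadm.yaml.new (the 2209-byte copy). A hand-run equivalent of the bootstrap would look roughly like the sketch below; this is illustrative only, since minikube drives kubeadm itself and its exact invocation and extra flags may differ:

    # Sketch: feed the staged config to the bundled kubeadm binary by hand.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new
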
	I1115 09:08:59.388379  360443 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:08:59.392201  360443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:08:59.403078  360443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:08:59.483000  360443 ssh_runner.go:195] Run: sudo systemctl start kubelet
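
At this point the log has written the kubelet unit and the 10-kubeadm.conf drop-in and starts kubelet via systemd. A quick way to confirm the drop-in is in effect on the node (a sketch):

    # Show the kubelet unit plus any drop-ins systemd has picked up, and its current state.
    systemctl cat kubelet
    systemctl is-active kubelet
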
	I1115 09:08:59.508041  360443 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747 for IP: 192.168.49.2
	I1115 09:08:59.508068  360443 certs.go:195] generating shared ca certs ...
	I1115 09:08:59.508087  360443 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.508231  360443 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:08:59.661356  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt ...
	I1115 09:08:59.661402  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt: {Name:mkf1de4e8a78ad57f64e4139f594a98d52310695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.661592  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key ...
	I1115 09:08:59.661605  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key: {Name:mk31505a0317517b998de0b0f06cb2b6b31f4e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.661681  360443 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:08:59.734298  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt ...
	I1115 09:08:59.734324  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt: {Name:mk61320cad84fd3ba4ccac41f30e7dc5aecf90ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.734527  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key ...
	I1115 09:08:59.734549  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key: {Name:mke3e6a615bf275abcd57bdc4cb81bfd7c5e6f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.734648  360443 certs.go:257] generating profile certs ...
	I1115 09:08:59.734718  360443 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.key
	I1115 09:08:59.734732  360443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt with IP's: []
	I1115 09:09:00.089596  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt ...
	I1115 09:09:00.089627  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: {Name:mk3cd13bba85bc95005ef2728ab8d27051685829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.089805  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.key ...
	I1115 09:09:00.089818  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.key: {Name:mk90474fe0cb9333f9149c33a4f5fd0fe06dd9e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.089890  360443 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845
	I1115 09:09:00.089909  360443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 09:09:00.324938  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845 ...
	I1115 09:09:00.324967  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845: {Name:mkf21ac2d95f37eea0c922fbb7d554c2f3dd46e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.325129  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845 ...
	I1115 09:09:00.325142  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845: {Name:mk193a24957f0f76901390e7a684e487923039a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.325215  360443 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt
	I1115 09:09:00.325291  360443 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key
	I1115 09:09:00.325339  360443 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key
	I1115 09:09:00.325357  360443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt with IP's: []
	I1115 09:09:00.820322  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt ...
	I1115 09:09:00.820356  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt: {Name:mk81ea4b79c506c3383e76f0970fe543f86962b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.820571  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key ...
	I1115 09:09:00.820588  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key: {Name:mk881c7f1bb83460acc56df5bfc62da91bb98187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.820762  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:09:00.820797  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:09:00.820821  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:09:00.820842  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:09:00.821526  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:09:00.840316  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:09:00.858018  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:09:00.875581  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:09:00.892959  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:09:00.911040  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:09:00.929789  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:09:00.948060  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:09:00.966492  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:09:00.985825  360443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:09:00.998726  360443 ssh_runner.go:195] Run: openssl version
	I1115 09:09:01.005602  360443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:09:01.017630  360443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:09:01.021880  360443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:09:01.021951  360443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:09:01.059536  360443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
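The b5213941.0 symlink name is the OpenSSL subject hash of minikubeCA.pem; /etc/ssl/certs is indexed by hash-named .0 links, which is why the step above first runs openssl x509 -hash. Recomputing it by hand looks like this (same command the log runs):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the symlink created above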
	I1115 09:09:01.069995  360443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:09:01.073709  360443 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:09:01.073769  360443 kubeadm.go:401] StartCluster: {Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:09:01.073853  360443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:09:01.073901  360443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:09:01.102042  360443 cri.go:89] found id: ""
	I1115 09:09:01.102118  360443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:09:01.110701  360443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:09:01.118940  360443 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:09:01.119043  360443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:09:01.127032  360443 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:09:01.127050  360443 kubeadm.go:158] found existing configuration files:
	
	I1115 09:09:01.127100  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:09:01.134997  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:09:01.135067  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:09:01.142873  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:09:01.150916  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:09:01.150972  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:09:01.158666  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:09:01.166665  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:09:01.166739  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:09:01.174611  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:09:01.183030  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:09:01.183089  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:09:01.191356  360443 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:09:01.250110  360443 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:09:01.308835  360443 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:09:11.413706  360443 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:09:11.413773  360443 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:09:11.413890  360443 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:09:11.413993  360443 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:09:11.414066  360443 kubeadm.go:319] OS: Linux
	I1115 09:09:11.414142  360443 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:09:11.414213  360443 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:09:11.414284  360443 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:09:11.414360  360443 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:09:11.414449  360443 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:09:11.414523  360443 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:09:11.414600  360443 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:09:11.414670  360443 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:09:11.414790  360443 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:09:11.414902  360443 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:09:11.415009  360443 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:09:11.415114  360443 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:09:11.417550  360443 out.go:252]   - Generating certificates and keys ...
	I1115 09:09:11.417634  360443 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:09:11.417728  360443 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:09:11.417809  360443 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:09:11.417889  360443 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:09:11.417958  360443 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:09:11.418028  360443 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:09:11.418105  360443 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:09:11.418343  360443 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-454747 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:09:11.418454  360443 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:09:11.418557  360443 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-454747 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:09:11.418636  360443 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:09:11.418713  360443 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:09:11.418779  360443 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:09:11.418861  360443 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:09:11.418904  360443 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:09:11.418953  360443 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:09:11.419003  360443 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:09:11.419070  360443 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:09:11.419140  360443 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:09:11.419230  360443 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:09:11.419322  360443 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:09:11.420630  360443 out.go:252]   - Booting up control plane ...
	I1115 09:09:11.420715  360443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:09:11.420800  360443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:09:11.420861  360443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:09:11.420965  360443 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:09:11.421058  360443 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:09:11.421173  360443 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:09:11.421294  360443 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:09:11.421333  360443 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:09:11.421465  360443 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:09:11.421621  360443 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:09:11.421704  360443 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000963177s
	I1115 09:09:11.421821  360443 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:09:11.421928  360443 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 09:09:11.422031  360443 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:09:11.422133  360443 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:09:11.422202  360443 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.559175368s
	I1115 09:09:11.422259  360443 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.241410643s
	I1115 09:09:11.422324  360443 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00226888s
	I1115 09:09:11.422443  360443 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:09:11.422596  360443 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:09:11.422670  360443 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:09:11.422837  360443 kubeadm.go:319] [mark-control-plane] Marking the node addons-454747 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:09:11.422911  360443 kubeadm.go:319] [bootstrap-token] Using token: iog1xk.8n83pbeopade97db
	I1115 09:09:11.424318  360443 out.go:252]   - Configuring RBAC rules ...
	I1115 09:09:11.424454  360443 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:09:11.424557  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:09:11.424714  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:09:11.424838  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:09:11.424940  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:09:11.425022  360443 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:09:11.425183  360443 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:09:11.425247  360443 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:09:11.425321  360443 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:09:11.425333  360443 kubeadm.go:319] 
	I1115 09:09:11.425447  360443 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:09:11.425457  360443 kubeadm.go:319] 
	I1115 09:09:11.425568  360443 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:09:11.425576  360443 kubeadm.go:319] 
	I1115 09:09:11.425597  360443 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:09:11.425648  360443 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:09:11.425700  360443 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:09:11.425713  360443 kubeadm.go:319] 
	I1115 09:09:11.425794  360443 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:09:11.425803  360443 kubeadm.go:319] 
	I1115 09:09:11.425875  360443 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:09:11.425883  360443 kubeadm.go:319] 
	I1115 09:09:11.425958  360443 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:09:11.426063  360443 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:09:11.426140  360443 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:09:11.426148  360443 kubeadm.go:319] 
	I1115 09:09:11.426220  360443 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:09:11.426287  360443 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:09:11.426292  360443 kubeadm.go:319] 
	I1115 09:09:11.426357  360443 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token iog1xk.8n83pbeopade97db \
	I1115 09:09:11.426472  360443 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 09:09:11.426501  360443 kubeadm.go:319] 	--control-plane 
	I1115 09:09:11.426507  360443 kubeadm.go:319] 
	I1115 09:09:11.426592  360443 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:09:11.426604  360443 kubeadm.go:319] 
	I1115 09:09:11.426673  360443 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iog1xk.8n83pbeopade97db \
	I1115 09:09:11.426779  360443 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
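The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes: it is a SHA-256 digest of the CA's public key, so a joiner can verify the CA it downloads via the bootstrap token. The standard kubeadm recipe for recomputing it, shown here only as an illustration (minikube keeps its CA at /var/lib/minikube/certs/ca.crt rather than the kubeadm default /etc/kubernetes/pki/ca.crt):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'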
	I1115 09:09:11.426789  360443 cni.go:84] Creating CNI manager for ""
	I1115 09:09:11.426795  360443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:09:11.428292  360443 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:09:11.429459  360443 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:09:11.433869  360443 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:09:11.433890  360443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:09:11.446740  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
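Because the docker driver is paired with the crio runtime, minikube picks kindnet as the CNI (see the cni.go lines above) and applies its manifest with the cluster's own kubectl. A follow-up check one could run once the apply returns, assuming the usual kindnet DaemonSet name from minikube's manifest (the name is not stated in this log):

	kubectl -n kube-system rollout status daemonset kindnet --timeout=120s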
	I1115 09:09:11.647335  360443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:09:11.647445  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:11.647455  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-454747 minikube.k8s.io/updated_at=2025_11_15T09_09_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=addons-454747 minikube.k8s.io/primary=true
	I1115 09:09:11.660140  360443 ops.go:34] apiserver oom_adj: -16
	I1115 09:09:11.723736  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:12.224188  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:12.724492  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:13.223989  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:13.723801  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:14.224584  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:14.724113  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:15.224164  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:15.724243  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:16.224450  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:16.289205  360443 kubeadm.go:1114] duration metric: took 4.641853418s to wait for elevateKubeSystemPrivileges
	I1115 09:09:16.289238  360443 kubeadm.go:403] duration metric: took 15.215474747s to StartCluster
	I1115 09:09:16.289259  360443 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:16.289409  360443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:09:16.289938  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:16.290164  360443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:09:16.290180  360443 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:09:16.290251  360443 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1115 09:09:16.290468  360443 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-454747"
	I1115 09:09:16.290491  360443 addons.go:70] Setting default-storageclass=true in profile "addons-454747"
	I1115 09:09:16.290502  360443 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:09:16.290516  360443 addons.go:70] Setting registry=true in profile "addons-454747"
	I1115 09:09:16.290537  360443 addons.go:70] Setting registry-creds=true in profile "addons-454747"
	I1115 09:09:16.290539  360443 addons.go:70] Setting storage-provisioner=true in profile "addons-454747"
	I1115 09:09:16.290508  360443 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-454747"
	I1115 09:09:16.290558  360443 addons.go:239] Setting addon storage-provisioner=true in "addons-454747"
	I1115 09:09:16.290558  360443 addons.go:70] Setting volcano=true in profile "addons-454747"
	I1115 09:09:16.290565  360443 addons.go:70] Setting gcp-auth=true in profile "addons-454747"
	I1115 09:09:16.290500  360443 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-454747"
	I1115 09:09:16.290573  360443 addons.go:239] Setting addon volcano=true in "addons-454747"
	I1115 09:09:16.290574  360443 addons.go:239] Setting addon registry-creds=true in "addons-454747"
	I1115 09:09:16.290580  360443 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-454747"
	I1115 09:09:16.290589  360443 addons.go:70] Setting ingress=true in profile "addons-454747"
	I1115 09:09:16.290603  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290538  360443 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-454747"
	I1115 09:09:16.290608  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290611  360443 addons.go:239] Setting addon ingress=true in "addons-454747"
	I1115 09:09:16.290545  360443 addons.go:70] Setting cloud-spanner=true in profile "addons-454747"
	I1115 09:09:16.290624  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290628  360443 addons.go:239] Setting addon cloud-spanner=true in "addons-454747"
	I1115 09:09:16.290633  360443 addons.go:70] Setting metrics-server=true in profile "addons-454747"
	I1115 09:09:16.290641  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290646  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290648  360443 addons.go:239] Setting addon metrics-server=true in "addons-454747"
	I1115 09:09:16.290689  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290578  360443 addons.go:70] Setting ingress-dns=true in profile "addons-454747"
	I1115 09:09:16.291001  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291008  360443 addons.go:239] Setting addon ingress-dns=true in "addons-454747"
	I1115 09:09:16.291050  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.291199  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291236  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291236  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291278  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291537  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291800  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291912  360443 addons.go:70] Setting volumesnapshots=true in profile "addons-454747"
	I1115 09:09:16.291929  360443 addons.go:239] Setting addon volumesnapshots=true in "addons-454747"
	I1115 09:09:16.291954  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.292477  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290531  360443 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-454747"
	I1115 09:09:16.293712  360443 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-454747"
	I1115 09:09:16.293741  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.294222  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290603  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290604  360443 mustload.go:66] Loading cluster: addons-454747
	I1115 09:09:16.290528  360443 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-454747"
	I1115 09:09:16.295243  360443 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-454747"
	I1115 09:09:16.295292  360443 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:09:16.295557  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.295595  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.295964  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290567  360443 addons.go:239] Setting addon registry=true in "addons-454747"
	I1115 09:09:16.298496  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.298576  360443 out.go:179] * Verifying Kubernetes components...
	I1115 09:09:16.290624  360443 addons.go:70] Setting inspektor-gadget=true in profile "addons-454747"
	I1115 09:09:16.298713  360443 addons.go:239] Setting addon inspektor-gadget=true in "addons-454747"
	I1115 09:09:16.290603  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.299341  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290477  360443 addons.go:70] Setting yakd=true in profile "addons-454747"
	I1115 09:09:16.299668  360443 addons.go:239] Setting addon yakd=true in "addons-454747"
	I1115 09:09:16.299711  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.291807  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.300374  360443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:09:16.301663  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.307509  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.310000  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.311251  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.353959  360443 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:09:16.355245  360443 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:09:16.355268  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:09:16.355329  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.355562  360443 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:09:16.358747  360443 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:09:16.358767  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:09:16.358968  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.362868  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:09:16.364073  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:09:16.364831  360443 addons.go:239] Setting addon default-storageclass=true in "addons-454747"
	I1115 09:09:16.364901  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.365461  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.366324  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:09:16.370207  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:09:16.371795  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:09:16.372943  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.374537  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:09:16.380792  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:09:16.381898  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:09:16.381918  360443 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:09:16.381996  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.387068  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:09:16.390062  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:09:16.392205  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:09:16.392753  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:09:16.393048  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.403657  360443 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-454747"
	I1115 09:09:16.403707  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.404923  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.406494  360443 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:09:16.407727  360443 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:09:16.409791  360443 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:09:16.409956  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:09:16.410003  360443 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:09:16.410096  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.412058  360443 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:09:16.412080  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:09:16.412138  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.412494  360443 out.go:179]   - Using image docker.io/registry:3.0.0
	W1115 09:09:16.413237  360443 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:09:16.413862  360443 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:09:16.413879  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:09:16.413942  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.416108  360443 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:09:16.417499  360443 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:09:16.417582  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:09:16.417756  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.421523  360443 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:09:16.424477  360443 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:09:16.425363  360443 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:09:16.425385  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:09:16.425458  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.426076  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:09:16.426104  360443 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:09:16.426159  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.426684  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:09:16.427972  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:09:16.429423  360443 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:09:16.431532  360443 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:09:16.432343  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:09:16.433490  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.433783  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:09:16.433486  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.435011  360443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:09:16.435184  360443 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:09:16.435218  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:09:16.435288  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.451568  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.457927  360443 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:09:16.459351  360443 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:09:16.460421  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 09:09:16.460521  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.461563  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.463997  360443 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:09:16.464019  360443 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:09:16.464076  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.479087  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.494630  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.514420  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.517754  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.521186  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.521777  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.523485  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.536115  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.536902  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.538819  360443 out.go:179]   - Using image docker.io/busybox:stable
	W1115 09:09:16.539893  360443 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:09:16.539931  360443 retry.go:31] will retry after 234.836428ms: ssh: handshake failed: EOF
	I1115 09:09:16.541409  360443 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:09:16.543522  360443 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:09:16.543570  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:09:16.543636  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.546435  360443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:09:16.547816  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.551456  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	W1115 09:09:16.556117  360443 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:09:16.556358  360443 retry.go:31] will retry after 256.485753ms: ssh: handshake failed: EOF
	I1115 09:09:16.583654  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.649898  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:09:16.652465  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:09:16.656195  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:09:16.666894  360443 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:09:16.666931  360443 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:09:16.676158  360443 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:09:16.676190  360443 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:09:16.687672  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:09:16.687673  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:09:16.687843  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:09:16.687856  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:09:16.707157  360443 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:09:16.707186  360443 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:09:16.715512  360443 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:09:16.715643  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:09:16.721169  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:09:16.721194  360443 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:09:16.724644  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:09:16.730790  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:09:16.735701  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:09:16.735773  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:09:16.745889  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:09:16.745919  360443 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:09:16.774846  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:09:16.774881  360443 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:09:16.781815  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:09:16.791146  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:09:16.791175  360443 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:09:16.793625  360443 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:09:16.793654  360443 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:09:16.794500  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:09:16.794535  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:09:16.801478  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:09:16.808935  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:09:16.808962  360443 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:09:16.810041  360443 node_ready.go:35] waiting up to 6m0s for node "addons-454747" to be "Ready" ...
	I1115 09:09:16.811113  360443 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1115 09:09:16.838552  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:09:16.838814  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:09:16.840980  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:09:16.841080  360443 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:09:16.867156  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:09:16.881654  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:09:16.881745  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:09:16.902724  360443 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:09:16.902812  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:09:16.911088  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:09:16.911121  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:09:16.959750  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:09:16.960890  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:09:16.960919  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:09:16.992997  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:09:17.011847  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:09:17.021872  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:09:17.021900  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:09:17.090886  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:09:17.090917  360443 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:09:17.105211  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:09:17.178613  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:09:17.178647  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:09:17.212176  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:09:17.212208  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:09:17.265738  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:09:17.265770  360443 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:09:17.292635  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:09:17.314409  360443 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-454747" context rescaled to 1 replicas
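	The kapi.go:214 line above records minikube trimming the CoreDNS deployment to a single replica for this one-node cluster. The log does not show the underlying API call; a rough kubectl equivalent, given only as an illustration and not taken from the test itself, would be:

		kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1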
	I1115 09:09:17.998480  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.216618688s)
	I1115 09:09:17.998551  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.197027334s)
	I1115 09:09:17.998584  360443 addons.go:480] Verifying addon registry=true in "addons-454747"
	I1115 09:09:17.998615  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131354939s)
	I1115 09:09:17.998641  360443 addons.go:480] Verifying addon metrics-server=true in "addons-454747"
	I1115 09:09:17.998783  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.03895073s)
	I1115 09:09:17.998935  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.268113693s)
	I1115 09:09:17.998956  360443 addons.go:480] Verifying addon ingress=true in "addons-454747"
	I1115 09:09:18.000467  360443 out.go:179] * Verifying registry addon...
	I1115 09:09:18.000494  360443 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-454747 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:09:18.001203  360443 out.go:179] * Verifying ingress addon...
	I1115 09:09:18.002931  360443 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:09:18.004140  360443 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:09:18.006211  360443 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:09:18.006235  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:18.006913  360443 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:09:18.006928  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
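	The repeating kapi.go:96 entries that follow are minikube polling the addon pods by label selector until they leave the Pending phase. Roughly the same check could be made by hand with ordinary kubectl queries (shown here only as an illustration, not output captured from the test):

		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx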
	I1115 09:09:18.309304  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.316258189s)
	I1115 09:09:18.309352  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.297468733s)
	W1115 09:09:18.309369  360443 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:09:18.309414  360443 retry.go:31] will retry after 131.526232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:09:18.309444  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.204195325s)
	I1115 09:09:18.309643  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.016970353s)
	I1115 09:09:18.309673  360443 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-454747"
	I1115 09:09:18.311138  360443 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:09:18.313824  360443 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:09:18.316100  360443 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:09:18.316119  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:18.441679  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:09:18.506033  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:18.506667  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:09:18.813605  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:18.817053  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:19.007068  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:19.007262  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:19.317557  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:19.506984  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:19.507307  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:19.816404  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:20.006947  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:20.007115  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:20.317173  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:20.506594  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:20.507047  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:20.817110  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:20.936954  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.495231549s)
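	The failure retried above is an ordering problem: the first apply submitted the VolumeSnapshotClass object in the same batch as the CRDs that define its kind, so the API server had no mapping for it yet ("ensure CRDs are installed first"). The forced re-apply, completed in the line above, succeeds once the CRDs are registered. A minimal two-phase sketch that avoids the race entirely is shown below; the wait step is an addition for illustration, not something the test performs:

		# register the snapshot CRDs and wait until the API server reports them Established
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# only then create objects of the new kind
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml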
	I1115 09:09:21.006714  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:21.008737  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:09:21.313561  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:21.316891  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:21.506867  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:21.507110  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:21.817023  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:22.006692  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:22.007015  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:22.317039  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:22.507264  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:22.507535  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:22.817760  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:23.006282  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:23.006691  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:23.316977  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:23.506252  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:23.506764  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:09:23.813750  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:23.817224  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:23.980247  360443 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:09:23.980313  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:23.998193  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:24.007171  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:24.007805  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:24.099375  360443 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:09:24.112379  360443 addons.go:239] Setting addon gcp-auth=true in "addons-454747"
	I1115 09:09:24.112499  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:24.113001  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:24.130876  360443 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:09:24.130925  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:24.148278  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:24.240555  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:09:24.242151  360443 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:09:24.243254  360443 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:09:24.243270  360443 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:09:24.257318  360443 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:09:24.257348  360443 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:09:24.271231  360443 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:09:24.271259  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:09:24.284687  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:09:24.317030  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:24.506197  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:24.506897  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:24.602017  360443 addons.go:480] Verifying addon gcp-auth=true in "addons-454747"
	I1115 09:09:24.603728  360443 out.go:179] * Verifying gcp-auth addon...
	I1115 09:09:24.605930  360443 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:09:24.608187  360443 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:09:24.608202  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:24.816317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:25.006092  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:25.006836  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:25.109419  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:25.316317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:25.506136  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:25.506853  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:25.609558  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:25.816808  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:26.007042  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:26.007151  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:26.109074  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:26.312995  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:26.316366  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:26.506586  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:26.507452  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:26.609249  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:26.816438  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:27.006650  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:27.007402  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:27.109449  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:27.316798  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:27.507159  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:27.507166  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:27.609676  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:27.817178  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:28.006320  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:28.006934  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:28.109686  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:28.313471  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:28.316852  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:28.507200  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:28.507387  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:28.609191  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:28.816433  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:29.006434  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:29.007238  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:29.109285  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:29.317121  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:29.506257  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:29.506889  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:29.609664  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:29.817072  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:30.006233  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:30.006690  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:30.109743  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:30.313966  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:30.316327  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:30.506568  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:30.507071  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:30.609311  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:30.816653  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:31.006905  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:31.007143  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:31.109990  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:31.316476  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:31.507979  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:31.508052  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:31.609040  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:31.816945  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:32.005951  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:32.006803  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:32.109741  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:32.317339  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:32.506349  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:32.506976  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:32.608801  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:32.813610  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:32.816916  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:33.005858  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:33.007471  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:33.109286  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:33.316344  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:33.506032  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:33.507030  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:33.608735  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:33.816598  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:34.006410  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:34.007114  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:34.108824  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:34.317002  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:34.505787  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:34.506565  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:34.609187  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:34.816919  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:35.005724  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:35.007536  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:35.109705  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:35.313492  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:35.316630  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:35.506838  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:35.507235  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:35.609436  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:35.816872  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:36.006771  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:36.006850  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:36.109582  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:36.316773  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:36.506288  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:36.506743  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:36.609712  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:36.816904  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:37.005767  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:37.007367  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:37.109190  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:37.316085  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:37.506076  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:37.506924  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:37.609629  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:37.813593  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:37.816903  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:38.006769  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:38.006970  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:38.109628  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:38.316734  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:38.508960  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:38.509132  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:38.608643  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:38.816937  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:39.005923  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:39.007645  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:39.109315  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:39.316440  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:39.506504  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:39.507250  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:39.609032  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:39.817119  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:40.005832  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:40.006753  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:40.109611  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:40.313455  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:40.316833  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:40.506977  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:40.506993  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:40.608951  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:40.816106  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:41.005820  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:41.006585  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:41.109572  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:41.317530  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:41.506820  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:41.507223  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:41.609428  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:41.817141  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:42.005880  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:42.006691  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:42.109665  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:42.313531  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:42.316656  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:42.506907  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:42.507439  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:42.609111  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:42.816561  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:43.006517  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:43.007217  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:43.109338  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:43.316288  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:43.506369  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:43.506989  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:43.609789  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:43.816718  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:44.006580  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:44.007628  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:44.109369  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:44.316273  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:44.506182  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:44.507103  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:44.608504  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:44.813182  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:44.816191  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:45.006135  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:45.006959  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:45.109523  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:45.316517  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:45.506387  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:45.507235  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:45.609159  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:45.816929  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:46.005866  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:46.006326  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:46.109005  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:46.316378  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:46.506274  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:46.507205  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:46.609293  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:46.817013  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:47.005906  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:47.006792  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:47.109560  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:47.313360  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:47.316420  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:47.506518  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:47.506990  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:47.609586  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:47.816594  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:48.006387  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:48.007196  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:48.109087  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:48.316619  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:48.505935  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:48.506921  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:48.609853  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:48.816441  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:49.006331  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:49.007261  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:49.109468  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:49.316423  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:49.506278  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:49.506933  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:49.609581  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:49.812998  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:49.816254  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:50.006117  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:50.006983  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:50.108841  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:50.316918  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:50.505963  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:50.507783  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:50.609752  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:50.816874  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:51.005647  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:51.007362  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:51.109090  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:51.316847  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:51.507085  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:51.507252  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:51.608900  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:51.813711  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:51.816920  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:52.006037  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:52.006707  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:52.109595  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:52.317120  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:52.505897  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:52.506868  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:52.609551  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:52.816550  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:53.006554  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:53.007165  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:53.108796  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:53.317050  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:53.506198  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:53.506996  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:53.609577  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:53.816361  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:54.006424  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:54.007035  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:54.109902  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:54.313816  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:54.317104  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:54.506427  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:54.506844  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:54.610043  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:54.816129  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:55.006150  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:55.007073  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:55.108755  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:55.316868  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:55.505842  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:55.507299  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:55.609593  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:55.816813  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:56.006676  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:56.006877  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:56.109700  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:56.316756  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:56.505862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:56.507657  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:56.609564  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:56.813409  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:56.816565  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:57.006606  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:57.007363  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:57.109248  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:57.316387  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:57.506626  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:57.507039  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:57.608877  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:57.813101  360443 node_ready.go:49] node "addons-454747" is "Ready"
	I1115 09:09:57.813140  360443 node_ready.go:38] duration metric: took 41.003062283s for node "addons-454747" to be "Ready" ...
	I1115 09:09:57.813160  360443 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:09:57.813243  360443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:09:57.817316  360443 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:09:57.817345  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:57.830296  360443 api_server.go:72] duration metric: took 41.540079431s to wait for apiserver process to appear ...
	I1115 09:09:57.830324  360443 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:09:57.830353  360443 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:09:57.834714  360443 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:09:57.835754  360443 api_server.go:141] control plane version: v1.34.1
	I1115 09:09:57.835785  360443 api_server.go:131] duration metric: took 5.452451ms to wait for apiserver health ...
	I1115 09:09:57.835798  360443 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:09:57.841339  360443 system_pods.go:59] 20 kube-system pods found
	I1115 09:09:57.841379  360443 system_pods.go:61] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending
	I1115 09:09:57.841432  360443 system_pods.go:61] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:57.841449  360443 system_pods.go:61] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:57.841462  360443 system_pods.go:61] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending
	I1115 09:09:57.841468  360443 system_pods.go:61] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending
	I1115 09:09:57.841476  360443 system_pods.go:61] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:57.841480  360443 system_pods.go:61] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:57.841486  360443 system_pods.go:61] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:57.841494  360443 system_pods.go:61] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:57.841503  360443 system_pods.go:61] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:57.841512  360443 system_pods.go:61] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:57.841517  360443 system_pods.go:61] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:57.841529  360443 system_pods.go:61] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:57.841538  360443 system_pods.go:61] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending
	I1115 09:09:57.841546  360443 system_pods.go:61] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:57.841554  360443 system_pods.go:61] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:57.841564  360443 system_pods.go:61] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending
	I1115 09:09:57.841570  360443 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending
	I1115 09:09:57.841575  360443 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending
	I1115 09:09:57.841583  360443 system_pods.go:61] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:57.841592  360443 system_pods.go:74] duration metric: took 5.786396ms to wait for pod list to return data ...
	I1115 09:09:57.841603  360443 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:09:57.844315  360443 default_sa.go:45] found service account: "default"
	I1115 09:09:57.844338  360443 default_sa.go:55] duration metric: took 2.726797ms for default service account to be created ...
	I1115 09:09:57.844349  360443 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:09:57.847705  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:57.847734  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending
	I1115 09:09:57.847751  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:57.847760  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:57.847769  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:57.847779  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending
	I1115 09:09:57.847785  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:57.847791  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:57.847800  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:57.847805  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:57.847816  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:57.847825  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:57.847831  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:57.847840  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:57.847845  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending
	I1115 09:09:57.847852  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:57.847864  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:57.847871  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending
	I1115 09:09:57.847880  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending
	I1115 09:09:57.847887  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending
	I1115 09:09:57.847897  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:57.847918  360443 retry.go:31] will retry after 225.081309ms: missing components: kube-dns
	I1115 09:09:58.005901  360443 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:09:58.005926  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:58.006898  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:58.078759  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:58.078806  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:09:58.078820  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:58.078830  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:58.078839  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:58.078854  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:09:58.078862  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:58.078869  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:58.078874  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:58.078880  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:58.078888  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:58.078903  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:58.078910  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:58.078917  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:58.078931  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:09:58.078942  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:58.078956  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:58.078969  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:09:58.078984  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.078996  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.079083  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:58.079110  360443 retry.go:31] will retry after 313.960058ms: missing components: kube-dns
	I1115 09:09:58.177164  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:58.317589  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:58.397501  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:58.397542  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:09:58.397553  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:58.397561  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:58.397568  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:58.397577  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:09:58.397581  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:58.397586  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:58.397589  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:58.397593  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:58.397598  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:58.397604  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:58.397609  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:58.397616  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:58.397622  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:09:58.397627  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:58.397633  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:58.397637  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:09:58.397642  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.397651  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.397659  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:58.397676  360443 retry.go:31] will retry after 447.659541ms: missing components: kube-dns
	I1115 09:09:58.507250  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:58.507388  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:58.609694  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:58.818266  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:58.850441  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:58.850481  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:09:58.850489  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Running
	I1115 09:09:58.850497  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:58.850502  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:58.850508  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:09:58.850512  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:58.850516  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:58.850520  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:58.850525  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:58.850533  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:58.850538  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:58.850551  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:58.850560  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:58.850568  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:09:58.850582  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:58.850591  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:58.850599  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:09:58.850609  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.850617  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.850623  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Running
	I1115 09:09:58.850631  360443 system_pods.go:126] duration metric: took 1.006277333s to wait for k8s-apps to be running ...
	I1115 09:09:58.850640  360443 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:09:58.850688  360443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:09:58.864101  360443 system_svc.go:56] duration metric: took 13.450668ms WaitForService to wait for kubelet
	I1115 09:09:58.864128  360443 kubeadm.go:587] duration metric: took 42.573922418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:09:58.864144  360443 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:09:58.867050  360443 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:09:58.867077  360443 node_conditions.go:123] node cpu capacity is 8
	I1115 09:09:58.867091  360443 node_conditions.go:105] duration metric: took 2.942859ms to run NodePressure ...
	I1115 09:09:58.867106  360443 start.go:242] waiting for startup goroutines ...
	I1115 09:09:59.006048  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:59.006651  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:59.109695  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:59.317691  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:59.508016  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:59.508272  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:59.612040  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:59.818027  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:00.008595  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:00.008912  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:00.110117  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:00.318170  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:00.506251  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:00.506747  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:00.609930  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:00.817681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:01.007208  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:01.007700  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:01.109987  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:01.317038  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:01.507825  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:01.507888  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:01.610558  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:01.818300  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:02.007613  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:02.007643  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:02.109634  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:02.318826  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:02.507231  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:02.507345  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:02.609762  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:02.818367  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:03.006813  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:03.007079  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:03.109935  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:03.317203  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:03.506090  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:03.506500  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:03.609754  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:03.817798  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:04.007159  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:04.007551  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:04.109945  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:04.317584  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:04.506862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:04.507546  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:04.609736  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:04.818815  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:05.006806  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:05.006936  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:05.110199  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:05.317257  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:05.506787  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:05.507130  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:05.609577  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:05.817654  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:06.006965  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:06.007600  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:06.109196  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:06.318575  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:06.506847  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:06.507472  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:06.609519  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:06.817317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:07.006744  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:07.007026  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:07.108690  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:07.317947  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:07.507054  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:07.507078  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:07.609406  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:07.817225  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:08.006562  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:08.006925  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:08.109788  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:08.317376  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:08.506652  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:08.507187  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:08.609464  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:08.818446  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:09.006290  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:09.007280  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:09.109512  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:09.318024  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:09.507197  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:09.507436  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:09.609562  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:09.820454  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:10.007190  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:10.007838  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:10.109895  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:10.318074  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:10.507111  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:10.507205  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:10.609246  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:10.817738  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:11.006673  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:11.007515  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:11.110416  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:11.318057  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:11.506659  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:11.506819  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:11.609894  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:11.817029  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:12.007317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:12.007372  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:12.109190  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:12.317531  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:12.506682  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:12.507315  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:12.609087  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:12.817852  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:13.007213  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:13.007366  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:13.109260  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:13.317790  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:13.506541  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:13.507231  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:13.609245  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:13.817549  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:14.007093  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:14.007151  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:14.109303  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:14.318105  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:14.506783  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:14.507208  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:14.609438  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:14.843897  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:15.007726  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:15.007770  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:15.109766  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:15.318191  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:15.506688  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:15.507742  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:15.610283  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:15.817575  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:16.006577  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:16.006996  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:16.109684  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:16.318303  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:16.506565  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:16.507362  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:16.609232  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:16.817681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:17.007236  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:17.008614  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:17.110284  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:17.317799  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:17.506931  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:17.507566  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:17.609792  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:17.817668  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:18.006819  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:18.007451  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:18.109681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:18.318543  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:18.506311  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:18.507048  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:18.609845  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:18.817503  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:19.007322  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:19.007436  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:19.109221  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:19.317954  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:19.507048  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:19.507092  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:19.610326  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:19.818481  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:20.006721  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:20.007361  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:20.109587  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:20.317697  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:20.506625  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:20.507339  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:20.609064  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:20.817188  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:21.005822  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:21.006478  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:21.109295  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:21.317367  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:21.506484  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:21.507000  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:21.609662  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:21.818416  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:22.006862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:22.007369  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:22.109195  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:22.318253  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:22.506572  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:22.506838  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:22.609442  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:22.818420  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:23.007351  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:23.007549  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:23.109671  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:23.318354  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:23.506326  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:23.506831  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:23.609417  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:23.817835  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:24.007100  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:24.007126  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:24.109443  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:24.318267  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:24.506552  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:24.506837  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:24.610107  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:24.817894  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:25.007836  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:25.007972  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:25.109084  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:25.317769  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:25.506778  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:25.507271  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:25.609034  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:25.817176  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:26.006512  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:26.006888  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:26.109843  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:26.316895  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:26.507931  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:26.507985  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:26.610018  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:26.817475  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:27.006978  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:27.007143  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:27.108513  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:27.317921  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:27.507111  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:27.507111  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:27.608822  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:27.817013  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:28.006683  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:28.007861  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:28.109482  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:28.317601  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:28.507898  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:28.510584  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:28.610312  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:28.818052  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:29.007229  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:29.007540  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:29.110007  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:29.317105  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:29.505778  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:29.507824  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:29.609865  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:29.817193  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:30.006723  360443 kapi.go:107] duration metric: took 1m12.003785212s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:10:30.007052  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:30.109972  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:30.318412  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:30.507932  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:30.610018  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:30.817631  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:31.008426  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:31.109794  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:31.318349  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:31.507744  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:31.609617  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:31.817737  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:32.007292  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:32.109234  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:32.317783  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:32.507645  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:32.609114  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:32.817049  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:33.008008  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:33.109721  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:33.318038  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:33.507907  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:33.609681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:33.817935  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:34.007773  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:34.109758  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:34.318180  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:34.509241  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:34.608853  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:34.817134  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:35.008535  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:35.110080  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:35.317180  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:35.508319  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:35.609071  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:35.818110  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:36.009355  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:36.116049  360443 kapi.go:107] duration metric: took 1m11.510114662s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:10:36.117773  360443 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-454747 cluster.
	I1115 09:10:36.119035  360443 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:10:36.120325  360443 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1115 09:10:36.318976  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:36.508624  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:36.818301  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:37.008235  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:37.318368  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:37.528141  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:37.816817  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:38.008134  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:38.317326  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:38.508020  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:38.817663  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:39.007367  360443 kapi.go:107] duration metric: took 1m21.003225168s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:10:39.317905  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:39.817862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:40.317743  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:40.821070  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:41.318158  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:41.818413  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:42.317829  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:42.818411  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:43.316902  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:43.817715  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:44.317588  360443 kapi.go:107] duration metric: took 1m26.003759509s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:10:44.319488  360443 out.go:179] * Enabled addons: cloud-spanner, registry-creds, ingress-dns, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1115 09:10:44.320693  360443 addons.go:515] duration metric: took 1m28.030447552s for enable addons: enabled=[cloud-spanner registry-creds ingress-dns amd-gpu-device-plugin nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher inspektor-gadget default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1115 09:10:44.320733  360443 start.go:247] waiting for cluster config update ...
	I1115 09:10:44.320756  360443 start.go:256] writing updated cluster config ...
	I1115 09:10:44.321030  360443 ssh_runner.go:195] Run: rm -f paused
	I1115 09:10:44.325050  360443 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:10:44.328332  360443 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cjxcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.332380  360443 pod_ready.go:94] pod "coredns-66bc5c9577-cjxcs" is "Ready"
	I1115 09:10:44.332426  360443 pod_ready.go:86] duration metric: took 4.072001ms for pod "coredns-66bc5c9577-cjxcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.334333  360443 pod_ready.go:83] waiting for pod "etcd-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.337901  360443 pod_ready.go:94] pod "etcd-addons-454747" is "Ready"
	I1115 09:10:44.337921  360443 pod_ready.go:86] duration metric: took 3.555974ms for pod "etcd-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.339625  360443 pod_ready.go:83] waiting for pod "kube-apiserver-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.342939  360443 pod_ready.go:94] pod "kube-apiserver-addons-454747" is "Ready"
	I1115 09:10:44.342959  360443 pod_ready.go:86] duration metric: took 3.313237ms for pod "kube-apiserver-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.344758  360443 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.728966  360443 pod_ready.go:94] pod "kube-controller-manager-addons-454747" is "Ready"
	I1115 09:10:44.728994  360443 pod_ready.go:86] duration metric: took 384.215389ms for pod "kube-controller-manager-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.929612  360443 pod_ready.go:83] waiting for pod "kube-proxy-jlh5q" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.329146  360443 pod_ready.go:94] pod "kube-proxy-jlh5q" is "Ready"
	I1115 09:10:45.329175  360443 pod_ready.go:86] duration metric: took 399.53063ms for pod "kube-proxy-jlh5q" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.529709  360443 pod_ready.go:83] waiting for pod "kube-scheduler-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.928984  360443 pod_ready.go:94] pod "kube-scheduler-addons-454747" is "Ready"
	I1115 09:10:45.929017  360443 pod_ready.go:86] duration metric: took 399.279192ms for pod "kube-scheduler-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.929032  360443 pod_ready.go:40] duration metric: took 1.603950365s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:10:45.974999  360443 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:10:45.976892  360443 out.go:179] * Done! kubectl is now configured to use "addons-454747" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 09:12:12 addons-454747 crio[775]: time="2025-11-15T09:12:12.664772619Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=67afbdab-7811-45b2-b7dd-e4fb2d91a126 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:12:12 addons-454747 crio[775]: time="2025-11-15T09:12:12.673843512Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.17535797Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=67afbdab-7811-45b2-b7dd-e4fb2d91a126 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.176007527Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=8ca44aa8-152e-4701-8f81-8c3c47a85138 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.208880091Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=686948d7-6a3e-4d17-97d0-d824af6919ec name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.212883029Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-gckbr/registry-creds" id=3117a9a3-ecd9-48b5-8e56-de383f909f9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.213035051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.219922326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.22037428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.261519257Z" level=info msg="Created container a576f01824b0718f09b2dec682f537bf82e9348d66322e53f9569bd136126ff8: kube-system/registry-creds-764b6fb674-gckbr/registry-creds" id=3117a9a3-ecd9-48b5-8e56-de383f909f9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.262185262Z" level=info msg="Starting container: a576f01824b0718f09b2dec682f537bf82e9348d66322e53f9569bd136126ff8" id=c256e691-ee2e-4872-b9d9-937a38c168ab name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:12:14 addons-454747 crio[775]: time="2025-11-15T09:12:14.264334735Z" level=info msg="Started container" PID=8926 containerID=a576f01824b0718f09b2dec682f537bf82e9348d66322e53f9569bd136126ff8 description=kube-system/registry-creds-764b6fb674-gckbr/registry-creds id=c256e691-ee2e-4872-b9d9-937a38c168ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c14b5a127fe193fcf62ede10951c9045b9c6fb8b08c358175d1326b585be32c
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.973522794Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-b64zs/POD" id=d5497cad-b77a-4ced-b7af-b7fd2b8ea68c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.973616809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.981127203Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-b64zs Namespace:default ID:c352a3a5af9ed88f36c783548353f2d9d93e99fd349826f47ab3941b6340ba2e UID:840010fb-687d-4417-9bef-a3f896d49a18 NetNS:/var/run/netns/351ddac7-8ae6-4341-8112-ee2929438ebe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132f98}] Aliases:map[]}"
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.981165833Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-b64zs to CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.991986299Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-b64zs Namespace:default ID:c352a3a5af9ed88f36c783548353f2d9d93e99fd349826f47ab3941b6340ba2e UID:840010fb-687d-4417-9bef-a3f896d49a18 NetNS:/var/run/netns/351ddac7-8ae6-4341-8112-ee2929438ebe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132f98}] Aliases:map[]}"
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.9921349Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-b64zs for CNI network kindnet (type=ptp)"
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.993015386Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.993877844Z" level=info msg="Ran pod sandbox c352a3a5af9ed88f36c783548353f2d9d93e99fd349826f47ab3941b6340ba2e with infra container: default/hello-world-app-5d498dc89-b64zs/POD" id=d5497cad-b77a-4ced-b7af-b7fd2b8ea68c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.995211919Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8c1ded88-52c8-4f3b-9458-ac2fbbe46bfa name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.995388174Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=8c1ded88-52c8-4f3b-9458-ac2fbbe46bfa name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.995454689Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=8c1ded88-52c8-4f3b-9458-ac2fbbe46bfa name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:13:39 addons-454747 crio[775]: time="2025-11-15T09:13:39.996293781Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ce05afe6-f1dc-421a-a7f2-665b1a7825b9 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:13:40 addons-454747 crio[775]: time="2025-11-15T09:13:40.001053675Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	a576f01824b07       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   4c14b5a127fe1       registry-creds-764b6fb674-gckbr            kube-system
	407d93404914b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   ed978a99b5b34       nginx                                      default
	152bfae953f10       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   96196cf71c8c2       busybox                                    default
	a113ced30ad2c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	15b0038a933d3       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	32d50218303b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	9585fc97c2461       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	7e1f8d44b44d4       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             3 minutes ago        Running             controller                               0                   7560370900d1a       ingress-nginx-controller-6c8bf45fb-vhvjt   ingress-nginx
	ca82589ebc097       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   0c1a9f81077e2       gcp-auth-78565c9fb4-gtlhb                  gcp-auth
	29cde6adf092c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	7a9c917944476       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago        Running             gadget                                   0                   02e394c36fb8f       gadget-5lh8b                               gadget
	c7e613941608e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   220245d49113e       registry-proxy-pspnm                       kube-system
	1fb29add2d5a8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   ee65cc22eb39e       amd-gpu-device-plugin-z8k7m                kube-system
	f093743456ae5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	9a64b60b839d5       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   3221f2c6eeda5       nvidia-device-plugin-daemonset-58w8g       kube-system
	d318e1e5a03be       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   ec29b3db79937       snapshot-controller-7d9fbc56b8-t9lwf       kube-system
	39e0b3ce59231       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   b40a0896bbc48       csi-hostpath-resizer-0                     kube-system
	c5cc58d4c65e1       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   b7c2cf2351e75       yakd-dashboard-5ff678cb9-lzndj             yakd-dashboard
	dd10873e5c8f4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   eab6795f7d68d       csi-hostpath-attacher-0                    kube-system
	92dbc66a225a6       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   d722173cf2abe       snapshot-controller-7d9fbc56b8-nwkcn       kube-system
	3214cef25f9cd       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             3 minutes ago        Exited              patch                                    1                   ed6df1585bf5d       ingress-nginx-admission-patch-kpcl9        ingress-nginx
	e8c5cef164c32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   fc7f842cda60f       ingress-nginx-admission-create-2bvdg       ingress-nginx
	88c5555ce7baf       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   a389a29ca91f4       local-path-provisioner-648f6765c9-wsqdl    local-path-storage
	f62ecc77b12f0       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago        Running             cloud-spanner-emulator                   0                   cb4dbe29313b3       cloud-spanner-emulator-6f9fcf858b-nnvcj    default
	61c26678bcffa       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   1c90d62b414ce       registry-6b586f9694-mqjdw                  kube-system
	c485f7a9c3e2b       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   1205721f2368e       metrics-server-85b7d694d7-m85dj            kube-system
	7370b2befcb1e       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   e6cf854e50f39       kube-ingress-dns-minikube                  kube-system
	79d436a219f2f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   8300ce2cf4229       coredns-66bc5c9577-cjxcs                   kube-system
	73844762f5663       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   b42d483742979       storage-provisioner                        kube-system
	bb9cab6c50c64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   9ce03c7023a0e       kube-proxy-jlh5q                           kube-system
	ab522c42d68a8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   d9711bc751312       kindnet-wq26q                              kube-system
	6dd9f12c0f48a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   a604006d5d7ac       etcd-addons-454747                         kube-system
	a73de86856e0e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   86be02d7f77e6       kube-controller-manager-addons-454747      kube-system
	475fb5d70b555       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   06152db044df1       kube-apiserver-addons-454747               kube-system
	b4dce63e838db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   b3ecfc13a7179       kube-scheduler-addons-454747               kube-system
	
	
	==> coredns [79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c] <==
	[INFO] 10.244.0.22:58428 - 3630 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005117315s
	[INFO] 10.244.0.22:47578 - 48598 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004691508s
	[INFO] 10.244.0.22:56891 - 62201 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004787414s
	[INFO] 10.244.0.22:57346 - 23454 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005262059s
	[INFO] 10.244.0.22:50455 - 3251 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005379554s
	[INFO] 10.244.0.22:37324 - 56591 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000861225s
	[INFO] 10.244.0.22:41530 - 45664 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00208366s
	[INFO] 10.244.0.27:56655 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002703s
	[INFO] 10.244.0.27:40442 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000193982s
	[INFO] 10.244.0.31:56915 - 30871 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000248642s
	[INFO] 10.244.0.31:44730 - 54704 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000347416s
	[INFO] 10.244.0.31:51279 - 37475 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000139186s
	[INFO] 10.244.0.31:44954 - 43466 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000186607s
	[INFO] 10.244.0.31:42369 - 54002 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000111994s
	[INFO] 10.244.0.31:37986 - 64568 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000142357s
	[INFO] 10.244.0.31:52896 - 44442 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003612039s
	[INFO] 10.244.0.31:33820 - 44181 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004292484s
	[INFO] 10.244.0.31:60680 - 59176 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005087837s
	[INFO] 10.244.0.31:38783 - 270 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005283286s
	[INFO] 10.244.0.31:58746 - 9007 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004656132s
	[INFO] 10.244.0.31:51025 - 59113 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004797161s
	[INFO] 10.244.0.31:46283 - 36944 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004684752s
	[INFO] 10.244.0.31:36737 - 31446 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004839272s
	[INFO] 10.244.0.31:44049 - 13397 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001778351s
	[INFO] 10.244.0.31:47790 - 40357 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001881649s
	
	
	==> describe nodes <==
	Name:               addons-454747
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-454747
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=addons-454747
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_09_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-454747
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-454747"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:09:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-454747
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:13:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:12:45 +0000   Sat, 15 Nov 2025 09:09:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:12:45 +0000   Sat, 15 Nov 2025 09:09:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:12:45 +0000   Sat, 15 Nov 2025 09:09:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:12:45 +0000   Sat, 15 Nov 2025 09:09:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-454747
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                770a3a40-fc20-448c-8377-e5435651e3a8
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     cloud-spanner-emulator-6f9fcf858b-nnvcj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  default                     hello-world-app-5d498dc89-b64zs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-5lh8b                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  gcp-auth                    gcp-auth-78565c9fb4-gtlhb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-vhvjt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m24s
	  kube-system                 amd-gpu-device-plugin-z8k7m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-66bc5c9577-cjxcs                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m25s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 csi-hostpathplugin-zkcmq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-addons-454747                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m31s
	  kube-system                 kindnet-wq26q                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m25s
	  kube-system                 kube-apiserver-addons-454747                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-addons-454747       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-jlh5q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-addons-454747                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 metrics-server-85b7d694d7-m85dj             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m24s
	  kube-system                 nvidia-device-plugin-daemonset-58w8g        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 registry-6b586f9694-mqjdw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 registry-creds-764b6fb674-gckbr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 registry-proxy-pspnm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-nwkcn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-t9lwf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  local-path-storage          local-path-provisioner-648f6765c9-wsqdl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lzndj              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node addons-454747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node addons-454747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x8 over 4m36s)  kubelet          Node addons-454747 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m31s                  kubelet          Node addons-454747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s                  kubelet          Node addons-454747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s                  kubelet          Node addons-454747 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m26s                  node-controller  Node addons-454747 event: Registered Node addons-454747 in Controller
	  Normal  NodeReady                3m44s                  kubelet          Node addons-454747 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072] <==
	{"level":"warn","ts":"2025-11-15T09:09:07.647427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.654019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.660275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.667237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.684597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.691823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.704108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.710260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.716848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.722871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.728614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.734715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.741050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.771138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.777348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.783549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.831228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:18.673012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:18.680131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.232967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.239805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.250672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.256836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37994","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:10:40.818352Z","caller":"traceutil/trace.go:172","msg":"trace[829510516] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"106.965678ms","start":"2025-11-15T09:10:40.711369Z","end":"2025-11-15T09:10:40.818334Z","steps":["trace[829510516] 'process raft request'  (duration: 106.850155ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:10:52.609184Z","caller":"traceutil/trace.go:172","msg":"trace[127138712] transaction","detail":"{read_only:false; response_revision:1296; number_of_response:1; }","duration":"129.342764ms","start":"2025-11-15T09:10:52.479825Z","end":"2025-11-15T09:10:52.609168Z","steps":["trace[127138712] 'process raft request'  (duration: 129.212638ms)"],"step_count":1}
	
	
	==> gcp-auth [ca82589ebc0974a7dfdb0ba2b8e31093ad90584fc9cd7c1cdf70a130408f4837] <==
	2025/11/15 09:10:35 GCP Auth Webhook started!
	2025/11/15 09:10:46 Ready to marshal response ...
	2025/11/15 09:10:46 Ready to write response ...
	2025/11/15 09:10:46 Ready to marshal response ...
	2025/11/15 09:10:46 Ready to write response ...
	2025/11/15 09:10:46 Ready to marshal response ...
	2025/11/15 09:10:46 Ready to write response ...
	2025/11/15 09:10:57 Ready to marshal response ...
	2025/11/15 09:10:57 Ready to write response ...
	2025/11/15 09:10:57 Ready to marshal response ...
	2025/11/15 09:10:57 Ready to write response ...
	2025/11/15 09:11:06 Ready to marshal response ...
	2025/11/15 09:11:06 Ready to write response ...
	2025/11/15 09:11:08 Ready to marshal response ...
	2025/11/15 09:11:08 Ready to write response ...
	2025/11/15 09:11:11 Ready to marshal response ...
	2025/11/15 09:11:11 Ready to write response ...
	2025/11/15 09:11:14 Ready to marshal response ...
	2025/11/15 09:11:14 Ready to write response ...
	2025/11/15 09:11:41 Ready to marshal response ...
	2025/11/15 09:11:41 Ready to write response ...
	2025/11/15 09:13:39 Ready to marshal response ...
	2025/11/15 09:13:39 Ready to write response ...
	
	
	==> kernel <==
	 09:13:41 up 56 min,  0 user,  load average: 0.24, 1.51, 2.09
	Linux addons-454747 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641] <==
	I1115 09:11:37.357053       1 main.go:301] handling current node
	I1115 09:11:47.356677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:11:47.356727       1 main.go:301] handling current node
	I1115 09:11:57.356370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:11:57.356446       1 main.go:301] handling current node
	I1115 09:12:07.357269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:12:07.357303       1 main.go:301] handling current node
	I1115 09:12:17.356433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:12:17.356487       1 main.go:301] handling current node
	I1115 09:12:27.359037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:12:27.359073       1 main.go:301] handling current node
	I1115 09:12:37.356386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:12:37.356453       1 main.go:301] handling current node
	I1115 09:12:47.356739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:12:47.356768       1 main.go:301] handling current node
	I1115 09:12:57.359134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:12:57.359191       1 main.go:301] handling current node
	I1115 09:13:07.361422       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:13:07.361455       1 main.go:301] handling current node
	I1115 09:13:17.356629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:13:17.356664       1 main.go:301] handling current node
	I1115 09:13:27.357046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:13:27.357080       1 main.go:301] handling current node
	I1115 09:13:37.362223       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:13:37.362256       1 main.go:301] handling current node
	
	
	==> kube-apiserver [475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:10:06.834573       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.210.113:443: connect: connection refused" logger="UnhandledError"
	W1115 09:10:07.836564       1 handler_proxy.go:99] no RequestInfo found in the context
	W1115 09:10:07.836592       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:10:07.836619       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1115 09:10:07.836634       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1115 09:10:07.836660       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1115 09:10:07.837674       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1115 09:10:08.291832       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1115 09:10:11.845120       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:10:11.845178       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:10:11.845180       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1115 09:10:56.645838       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46000: use of closed network connection
	E1115 09:10:56.798950       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46024: use of closed network connection
	I1115 09:11:14.154456       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1115 09:11:14.388580       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.74.1"}
	I1115 09:11:21.438305       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1115 09:13:39.750667       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.52.13"}
	
	
	==> kube-controller-manager [a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b] <==
	I1115 09:09:15.215335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:09:15.215377       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:09:15.215402       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 09:09:15.215422       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:09:15.215473       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 09:09:15.215511       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 09:09:15.215555       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:09:15.215629       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:09:15.215718       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 09:09:15.215740       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 09:09:15.215921       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:09:15.217947       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:09:15.222034       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:09:15.226192       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 09:09:15.231584       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 09:09:15.233873       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:09:15.239082       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E1115 09:09:45.226940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:09:45.227115       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 09:09:45.227174       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:09:45.241121       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 09:09:45.244968       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:09:45.328033       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:09:45.345387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:10:00.174595       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a] <==
	I1115 09:09:17.038868       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:09:17.339709       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:09:17.441569       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:09:17.446500       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:09:17.446669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:09:17.629759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:09:17.629921       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:09:17.638576       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:09:17.639017       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:09:17.639898       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:09:17.642286       1 config.go:200] "Starting service config controller"
	I1115 09:09:17.642336       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:09:17.642441       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:09:17.642457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:09:17.642856       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:09:17.642896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:09:17.643208       1 config.go:309] "Starting node config controller"
	I1115 09:09:17.643235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:09:17.643245       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:09:17.742482       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:09:17.743691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:09:17.743777       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f] <==
	E1115 09:09:08.238032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:09:08.238109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:09:08.238126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:09:08.238234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:09:08.238301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:09:08.238474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:09:08.238500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:09:08.238628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:09:08.238657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:09:08.238704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:09:08.238754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:09:08.238782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:09:08.238804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:09:08.238799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:09:08.239196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:09:09.145250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:09:09.265103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:09:09.341576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:09:09.350583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:09:09.379693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:09:09.390864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:09:09.392711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:09:09.405713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:09:09.423800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1115 09:09:09.834594       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:11:44 addons-454747 kubelet[1303]: I1115 09:11:44.284386    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=2.110285338 podStartE2EDuration="3.284363496s" podCreationTimestamp="2025-11-15 09:11:41 +0000 UTC" firstStartedPulling="2025-11-15 09:11:42.306926556 +0000 UTC m=+151.747093306" lastFinishedPulling="2025-11-15 09:11:43.481004726 +0000 UTC m=+152.921171464" observedRunningTime="2025-11-15 09:11:44.284124561 +0000 UTC m=+153.724291320" watchObservedRunningTime="2025-11-15 09:11:44.284363496 +0000 UTC m=+153.724530256"
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.032233    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6lwm\" (UniqueName: \"kubernetes.io/projected/8b6df6a5-5c51-4826-9ed9-2fca2e93beb5-kube-api-access-t6lwm\") pod \"8b6df6a5-5c51-4826-9ed9-2fca2e93beb5\" (UID: \"8b6df6a5-5c51-4826-9ed9-2fca2e93beb5\") "
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.032324    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b6df6a5-5c51-4826-9ed9-2fca2e93beb5-gcp-creds\") pod \"8b6df6a5-5c51-4826-9ed9-2fca2e93beb5\" (UID: \"8b6df6a5-5c51-4826-9ed9-2fca2e93beb5\") "
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.032444    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b6df6a5-5c51-4826-9ed9-2fca2e93beb5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8b6df6a5-5c51-4826-9ed9-2fca2e93beb5" (UID: "8b6df6a5-5c51-4826-9ed9-2fca2e93beb5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.032539    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^192e84d9-c203-11f0-afc4-ee51f342b8d8\") pod \"8b6df6a5-5c51-4826-9ed9-2fca2e93beb5\" (UID: \"8b6df6a5-5c51-4826-9ed9-2fca2e93beb5\") "
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.032729    1303 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b6df6a5-5c51-4826-9ed9-2fca2e93beb5-gcp-creds\") on node \"addons-454747\" DevicePath \"\""
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.034806    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6df6a5-5c51-4826-9ed9-2fca2e93beb5-kube-api-access-t6lwm" (OuterVolumeSpecName: "kube-api-access-t6lwm") pod "8b6df6a5-5c51-4826-9ed9-2fca2e93beb5" (UID: "8b6df6a5-5c51-4826-9ed9-2fca2e93beb5"). InnerVolumeSpecName "kube-api-access-t6lwm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.035959    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^192e84d9-c203-11f0-afc4-ee51f342b8d8" (OuterVolumeSpecName: "task-pv-storage") pod "8b6df6a5-5c51-4826-9ed9-2fca2e93beb5" (UID: "8b6df6a5-5c51-4826-9ed9-2fca2e93beb5"). InnerVolumeSpecName "pvc-cccbecb8-5cc3-410f-93dc-8415f3352433". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.133736    1303 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6lwm\" (UniqueName: \"kubernetes.io/projected/8b6df6a5-5c51-4826-9ed9-2fca2e93beb5-kube-api-access-t6lwm\") on node \"addons-454747\" DevicePath \"\""
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.133804    1303 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-cccbecb8-5cc3-410f-93dc-8415f3352433\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^192e84d9-c203-11f0-afc4-ee51f342b8d8\") on node \"addons-454747\" "
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.138371    1303 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-cccbecb8-5cc3-410f-93dc-8415f3352433" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^192e84d9-c203-11f0-afc4-ee51f342b8d8") on node "addons-454747"
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.234564    1303 reconciler_common.go:299] "Volume detached for volume \"pvc-cccbecb8-5cc3-410f-93dc-8415f3352433\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^192e84d9-c203-11f0-afc4-ee51f342b8d8\") on node \"addons-454747\" DevicePath \"\""
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.299904    1303 scope.go:117] "RemoveContainer" containerID="28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052"
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.309042    1303 scope.go:117] "RemoveContainer" containerID="28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052"
	Nov 15 09:11:50 addons-454747 kubelet[1303]: E1115 09:11:50.309422    1303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052\": container with ID starting with 28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052 not found: ID does not exist" containerID="28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052"
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.309472    1303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052"} err="failed to get container status \"28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052\": rpc error: code = NotFound desc = could not find container \"28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052\": container with ID starting with 28a493c5ddb70eeb945efe46e682ab1dfe7d03bb9eaeec71259c9b51ede3b052 not found: ID does not exist"
	Nov 15 09:11:50 addons-454747 kubelet[1303]: I1115 09:11:50.640515    1303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6df6a5-5c51-4826-9ed9-2fca2e93beb5" path="/var/lib/kubelet/pods/8b6df6a5-5c51-4826-9ed9-2fca2e93beb5/volumes"
	Nov 15 09:11:51 addons-454747 kubelet[1303]: I1115 09:11:51.638728    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-z8k7m" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:12:00 addons-454747 kubelet[1303]: E1115 09:12:00.710552    1303 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-gckbr" podUID="799c7fb7-4643-4a6c-ad1f-e02d10f99902"
	Nov 15 09:12:14 addons-454747 kubelet[1303]: I1115 09:12:14.408744    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-gckbr" podStartSLOduration=176.896128755 podStartE2EDuration="2m58.408724801s" podCreationTimestamp="2025-11-15 09:09:16 +0000 UTC" firstStartedPulling="2025-11-15 09:12:12.664378752 +0000 UTC m=+182.104545511" lastFinishedPulling="2025-11-15 09:12:14.176974814 +0000 UTC m=+183.617141557" observedRunningTime="2025-11-15 09:12:14.408626488 +0000 UTC m=+183.848793247" watchObservedRunningTime="2025-11-15 09:12:14.408724801 +0000 UTC m=+183.848891562"
	Nov 15 09:12:38 addons-454747 kubelet[1303]: I1115 09:12:38.638455    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pspnm" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:13:07 addons-454747 kubelet[1303]: I1115 09:13:07.638251    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-58w8g" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:13:10 addons-454747 kubelet[1303]: I1115 09:13:10.639665    1303 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-z8k7m" secret="" err="secret \"gcp-auth\" not found"
	Nov 15 09:13:39 addons-454747 kubelet[1303]: I1115 09:13:39.758577    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/840010fb-687d-4417-9bef-a3f896d49a18-gcp-creds\") pod \"hello-world-app-5d498dc89-b64zs\" (UID: \"840010fb-687d-4417-9bef-a3f896d49a18\") " pod="default/hello-world-app-5d498dc89-b64zs"
	Nov 15 09:13:39 addons-454747 kubelet[1303]: I1115 09:13:39.758643    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7lh5\" (UniqueName: \"kubernetes.io/projected/840010fb-687d-4417-9bef-a3f896d49a18-kube-api-access-t7lh5\") pod \"hello-world-app-5d498dc89-b64zs\" (UID: \"840010fb-687d-4417-9bef-a3f896d49a18\") " pod="default/hello-world-app-5d498dc89-b64zs"
	
	
	==> storage-provisioner [73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9] <==
	W1115 09:13:17.190095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:19.193553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:19.197342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:21.200875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:21.205895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:23.209902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:23.213715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:25.217404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:25.221382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:27.224582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:27.229964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:29.233666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:29.237691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:31.241163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:31.245443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:33.249021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:33.253019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:35.256272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:35.260122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:37.262827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:37.266655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:39.270373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:39.274129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:41.277592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:13:41.281880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-454747 -n addons-454747
helpers_test.go:269: (dbg) Run:  kubectl --context addons-454747 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-454747 describe pod ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-454747 describe pod ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9: exit status 1 (61.360466ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2bvdg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kpcl9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-454747 describe pod ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (263.722047ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:13:42.212334  375104 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:13:42.212496  375104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:13:42.212507  375104 out.go:374] Setting ErrFile to fd 2...
	I1115 09:13:42.212513  375104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:13:42.212737  375104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:13:42.213037  375104 mustload.go:66] Loading cluster: addons-454747
	I1115 09:13:42.213435  375104 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:13:42.213457  375104 addons.go:607] checking whether the cluster is paused
	I1115 09:13:42.213569  375104 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:13:42.213587  375104 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:13:42.213980  375104 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:13:42.234343  375104 ssh_runner.go:195] Run: systemctl --version
	I1115 09:13:42.234425  375104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:13:42.256701  375104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:13:42.352495  375104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:13:42.352588  375104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:13:42.382717  375104 cri.go:89] found id: "a576f01824b0718f09b2dec682f537bf82e9348d66322e53f9569bd136126ff8"
	I1115 09:13:42.382744  375104 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:13:42.382750  375104 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:13:42.382754  375104 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:13:42.382758  375104 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:13:42.382762  375104 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:13:42.382765  375104 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:13:42.382769  375104 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:13:42.382773  375104 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:13:42.382781  375104 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:13:42.382785  375104 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:13:42.382789  375104 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:13:42.382794  375104 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:13:42.382799  375104 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:13:42.382817  375104 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:13:42.382825  375104 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:13:42.382832  375104 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:13:42.382837  375104 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:13:42.382856  375104 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:13:42.382863  375104 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:13:42.382872  375104 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:13:42.382879  375104 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:13:42.382883  375104 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:13:42.382888  375104 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:13:42.382893  375104 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:13:42.382900  375104 cri.go:89] found id: ""
	I1115 09:13:42.382946  375104 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:13:42.397431  375104 out.go:203] 
	W1115 09:13:42.398762  375104 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:13:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:13:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:13:42.398781  375104 out.go:285] * 
	* 
	W1115 09:13:42.402782  375104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:13:42.404528  375104 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable ingress --alsologtostderr -v=1: exit status 11 (244.721204ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:13:42.466332  375188 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:13:42.466623  375188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:13:42.466632  375188 out.go:374] Setting ErrFile to fd 2...
	I1115 09:13:42.466636  375188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:13:42.466808  375188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:13:42.467058  375188 mustload.go:66] Loading cluster: addons-454747
	I1115 09:13:42.467434  375188 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:13:42.467450  375188 addons.go:607] checking whether the cluster is paused
	I1115 09:13:42.467533  375188 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:13:42.467545  375188 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:13:42.467892  375188 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:13:42.486192  375188 ssh_runner.go:195] Run: systemctl --version
	I1115 09:13:42.486251  375188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:13:42.503921  375188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:13:42.597338  375188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:13:42.597456  375188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:13:42.627774  375188 cri.go:89] found id: "a576f01824b0718f09b2dec682f537bf82e9348d66322e53f9569bd136126ff8"
	I1115 09:13:42.627816  375188 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:13:42.627824  375188 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:13:42.627829  375188 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:13:42.627834  375188 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:13:42.627840  375188 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:13:42.627844  375188 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:13:42.627849  375188 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:13:42.627854  375188 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:13:42.627868  375188 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:13:42.627876  375188 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:13:42.627881  375188 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:13:42.627888  375188 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:13:42.627893  375188 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:13:42.627899  375188 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:13:42.627910  375188 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:13:42.627917  375188 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:13:42.627923  375188 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:13:42.627926  375188 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:13:42.627930  375188 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:13:42.627933  375188 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:13:42.627935  375188 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:13:42.627937  375188 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:13:42.627940  375188 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:13:42.627942  375188 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:13:42.627944  375188 cri.go:89] found id: ""
	I1115 09:13:42.627989  375188 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:13:42.642832  375188 out.go:203] 
	W1115 09:13:42.644157  375188 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:13:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:13:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:13:42.644174  375188 out.go:285] * 
	* 
	W1115 09:13:42.648157  375188 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:13:42.649496  375188 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.78s)
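
Note: every `addons disable` failure captured above follows the same pattern. Before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then asking runc for its container list; the crictl step succeeds, but `sudo runc list -f json` fails because /run/runc does not exist on this crio node, which surfaces as MK_ADDON_DISABLE_PAUSED / exit status 11. A minimal sketch of that probe, reusing the exact commands from the ssh_runner traces above (run inside the addons-454747 node, e.g. via `minikube -p addons-454747 ssh`; both commands are copied from this run's logs, not new):

	# list kube-system containers known to the CRI runtime (succeeds in the trace above)
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# enumerate runc containers to decide whether they are paused
	# (fails in the trace above: open /run/runc: no such file or directory)
	sudo runc list -f json
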

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-5lh8b" [eeb5bb47-2f6f-4fee-969d-184fbe13525b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004627441s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (306.410643ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:11:13.644058  370011 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:13.644428  370011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:13.644450  370011 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:13.644458  370011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:13.644774  370011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:13.645211  370011 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:13.645761  370011 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:13.645788  370011 addons.go:607] checking whether the cluster is paused
	I1115 09:11:13.645937  370011 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:13.645957  370011 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:13.646548  370011 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:13.670743  370011 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:13.670856  370011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:13.695512  370011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:13.800237  370011 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:13.800337  370011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:13.836747  370011 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:13.836791  370011 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:13.836797  370011 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:13.836801  370011 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:13.836806  370011 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:13.836811  370011 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:13.836815  370011 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:13.836819  370011 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:13.836823  370011 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:13.836831  370011 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:13.836836  370011 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:13.836840  370011 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:13.836845  370011 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:13.836849  370011 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:13.836853  370011 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:13.836870  370011 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:13.836881  370011 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:13.836886  370011 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:13.836890  370011 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:13.836894  370011 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:13.836898  370011 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:13.836902  370011 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:13.836906  370011 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:13.836910  370011 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:13.836914  370011 cri.go:89] found id: ""
	I1115 09:11:13.836961  370011 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:13.854914  370011 out.go:203] 
	W1115 09:11:13.856314  370011 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:13.856353  370011 out.go:285] * 
	* 
	W1115 09:11:13.861926  370011 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:13.863509  370011 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.31s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.823387ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003272586s
addons_test.go:463: (dbg) Run:  kubectl --context addons-454747 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (246.982873ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:11:17.728860  371042 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:17.729152  371042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:17.729165  371042 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:17.729171  371042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:17.729357  371042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:17.729659  371042 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:17.730014  371042 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:17.730031  371042 addons.go:607] checking whether the cluster is paused
	I1115 09:11:17.730113  371042 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:17.730125  371042 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:17.730511  371042 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:17.748281  371042 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:17.748349  371042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:17.765725  371042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:17.858081  371042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:17.858147  371042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:17.887008  371042 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:17.887033  371042 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:17.887039  371042 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:17.887043  371042 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:17.887046  371042 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:17.887048  371042 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:17.887051  371042 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:17.887053  371042 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:17.887056  371042 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:17.887063  371042 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:17.887067  371042 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:17.887071  371042 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:17.887075  371042 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:17.887084  371042 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:17.887088  371042 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:17.887106  371042 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:17.887114  371042 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:17.887120  371042 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:17.887124  371042 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:17.887129  371042 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:17.887133  371042 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:17.887136  371042 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:17.887139  371042 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:17.887141  371042 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:17.887144  371042 cri.go:89] found id: ""
	I1115 09:11:17.887181  371042 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:17.901470  371042 out.go:203] 
	W1115 09:11:17.902685  371042 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:17.902706  371042 out.go:285] * 
	* 
	W1115 09:11:17.906748  371042 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:17.908161  371042 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (54.1s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.231884ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-454747 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-454747 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ae0c0552-d79c-4749-8d8c-863c2a7f57a4] Pending
helpers_test.go:352: "task-pv-pod" [ae0c0552-d79c-4749-8d8c-863c2a7f57a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ae0c0552-d79c-4749-8d8c-863c2a7f57a4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003985884s
addons_test.go:572: (dbg) Run:  kubectl --context addons-454747 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-454747 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-454747 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-454747 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-454747 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-454747 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-454747 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8b6df6a5-5c51-4826-9ed9-2fca2e93beb5] Pending
helpers_test.go:352: "task-pv-pod-restore" [8b6df6a5-5c51-4826-9ed9-2fca2e93beb5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8b6df6a5-5c51-4826-9ed9-2fca2e93beb5] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004584714s
addons_test.go:614: (dbg) Run:  kubectl --context addons-454747 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-454747 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-454747 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (254.722078ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:11:50.713301  372836 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:50.713590  372836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:50.713600  372836 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:50.713604  372836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:50.713814  372836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:50.714098  372836 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:50.714451  372836 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:50.714467  372836 addons.go:607] checking whether the cluster is paused
	I1115 09:11:50.714550  372836 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:50.714562  372836 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:50.714928  372836 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:50.733149  372836 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:50.733213  372836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:50.750905  372836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:50.844930  372836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:50.845000  372836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:50.875265  372836 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:50.875302  372836 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:50.875307  372836 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:50.875310  372836 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:50.875313  372836 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:50.875318  372836 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:50.875320  372836 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:50.875323  372836 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:50.875325  372836 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:50.875336  372836 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:50.875339  372836 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:50.875341  372836 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:50.875344  372836 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:50.875346  372836 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:50.875349  372836 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:50.875363  372836 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:50.875373  372836 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:50.875380  372836 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:50.875384  372836 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:50.875388  372836 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:50.875403  372836 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:50.875407  372836 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:50.875412  372836 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:50.875416  372836 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:50.875420  372836 cri.go:89] found id: ""
	I1115 09:11:50.875481  372836 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:50.890292  372836 out.go:203] 
	W1115 09:11:50.891709  372836 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:50.891730  372836 out.go:285] * 
	* 
	W1115 09:11:50.897330  372836 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:50.898924  372836 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (245.790981ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:11:50.964543  372899 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:50.964815  372899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:50.964828  372899 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:50.964833  372899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:50.965094  372899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:50.965440  372899 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:50.965841  372899 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:50.965860  372899 addons.go:607] checking whether the cluster is paused
	I1115 09:11:50.965968  372899 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:50.965987  372899 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:50.966422  372899 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:50.985145  372899 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:50.985231  372899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:51.002286  372899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:51.095247  372899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:51.095333  372899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:51.124970  372899 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:51.125007  372899 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:51.125013  372899 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:51.125018  372899 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:51.125021  372899 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:51.125027  372899 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:51.125031  372899 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:51.125034  372899 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:51.125037  372899 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:51.125050  372899 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:51.125054  372899 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:51.125058  372899 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:51.125062  372899 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:51.125066  372899 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:51.125071  372899 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:51.125089  372899 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:51.125100  372899 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:51.125106  372899 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:51.125110  372899 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:51.125114  372899 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:51.125117  372899 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:51.125121  372899 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:51.125124  372899 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:51.125128  372899 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:51.125131  372899 cri.go:89] found id: ""
	I1115 09:11:51.125194  372899 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:51.139978  372899 out.go:203] 
	W1115 09:11:51.141621  372899 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:51.141654  372899 out.go:285] * 
	* 
	W1115 09:11:51.145791  372899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:51.147378  372899 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (54.10s)

TestAddons/parallel/Headlamp (2.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-454747 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-454747 --alsologtostderr -v=1: exit status 11 (245.785904ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1115 09:11:17.970732  371109 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:17.971019  371109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:17.971030  371109 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:17.971034  371109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:17.971267  371109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:17.971584  371109 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:17.972373  371109 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:17.972434  371109 addons.go:607] checking whether the cluster is paused
	I1115 09:11:17.972638  371109 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:17.972653  371109 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:17.973575  371109 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:17.991816  371109 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:17.991873  371109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:18.011712  371109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:18.104245  371109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:18.104336  371109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:18.132758  371109 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:18.132779  371109 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:18.132783  371109 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:18.132788  371109 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:18.132791  371109 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:18.132795  371109 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:18.132798  371109 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:18.132801  371109 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:18.132806  371109 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:18.132816  371109 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:18.132824  371109 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:18.132829  371109 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:18.132837  371109 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:18.132842  371109 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:18.132850  371109 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:18.132856  371109 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:18.132862  371109 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:18.132866  371109 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:18.132869  371109 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:18.132871  371109 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:18.132874  371109 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:18.132876  371109 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:18.132879  371109 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:18.132885  371109 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:18.132890  371109 cri.go:89] found id: ""
	I1115 09:11:18.132936  371109 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:18.146834  371109 out.go:203] 
	W1115 09:11:18.148113  371109 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:18.148137  371109 out.go:285] * 
	* 
	W1115 09:11:18.152427  371109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:18.153940  371109 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-454747 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-454747
helpers_test.go:243: (dbg) docker inspect addons-454747:

-- stdout --
	[
	    {
	        "Id": "931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3",
	        "Created": "2025-11-15T09:08:53.071755917Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361079,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:08:53.105106011Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/hostname",
	        "HostsPath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/hosts",
	        "LogPath": "/var/lib/docker/containers/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3/931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3-json.log",
	        "Name": "/addons-454747",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-454747:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-454747",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "931c889a25ac72d1510501361929e13fb49aacd67c533ffadd760b636c2a8ea3",
	                "LowerDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f418e46e4671b796ba0b1d33ac71bdb56f8d7d4259cc43606a461ab77d1226/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-454747",
	                "Source": "/var/lib/docker/volumes/addons-454747/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-454747",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-454747",
	                "name.minikube.sigs.k8s.io": "addons-454747",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c927aa05be2b299cb7cb65e10fa57832d3fe83b5685f4f2d37af98648fb98a8",
	            "SandboxKey": "/var/run/docker/netns/5c927aa05be2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-454747": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e7d1342f2c11565f602c3bd0dfb2d31a9a92160d201bf9a893b8dc748fe9244f",
	                    "EndpointID": "7ca09f174830ce94b08255c7ccb6cba5d49ce52a1573a670c226f2e89ceaf912",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:a7:ec:2c:70:a1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-454747",
	                        "931c889a25ac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
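(Editor's note: the full `docker inspect` JSON above is what helpers_test dumps on failure; when triaging, individual fields can be pulled with Go templates instead, which is the same approach the harness itself uses further down in the Last Start log. A minimal sketch, assuming the docker CLI on the Jenkins host and the container name taken from this report:)

	# published host port backing the container's SSH endpoint (22/tcp)
	docker container inspect addons-454747 \
	  --format '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}'

	# container IP address on the addons-454747 bridge network
	docker container inspect addons-454747 \
	  --format '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}'
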
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-454747 -n addons-454747
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-454747 logs -n 25: (1.175203669s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-934087                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-934087   │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ start   │ -o=json --download-only -p download-only-369450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-369450   │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ delete  │ -p download-only-369450                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-369450   │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ delete  │ -p download-only-934087                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-934087   │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ delete  │ -p download-only-369450                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-369450   │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ start   │ --download-only -p download-docker-876877 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-876877 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ delete  │ -p download-docker-876877                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-876877 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ start   │ --download-only -p binary-mirror-730212 --alsologtostderr --binary-mirror http://127.0.0.1:42111 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-730212   │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ delete  │ -p binary-mirror-730212                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-730212   │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p addons-454747                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ addons  │ disable dashboard -p addons-454747                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ start   │ -p addons-454747 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:10 UTC │
	│ addons  │ addons-454747 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │                     │
	│ addons  │ addons-454747 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │                     │
	│ addons  │ addons-454747 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ ssh     │ addons-454747 ssh cat /opt/local-path-provisioner/pvc-cb0fe8e1-5280-47d2-a0f7-3e04a804af72_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │ 15 Nov 25 09:11 UTC │
	│ addons  │ addons-454747 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ ip      │ addons-454747 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │ 15 Nov 25 09:11 UTC │
	│ addons  │ addons-454747 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-454747 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	│ addons  │ enable headlamp -p addons-454747 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-454747          │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:08:30.520592  360443 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:08:30.520894  360443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:30.520905  360443 out.go:374] Setting ErrFile to fd 2...
	I1115 09:08:30.520910  360443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:30.521138  360443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:08:30.521757  360443 out.go:368] Setting JSON to false
	I1115 09:08:30.522770  360443 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3051,"bootTime":1763194659,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:08:30.522883  360443 start.go:143] virtualization: kvm guest
	I1115 09:08:30.524759  360443 out.go:179] * [addons-454747] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:08:30.526035  360443 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:08:30.526034  360443 notify.go:221] Checking for updates...
	I1115 09:08:30.527591  360443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:08:30.529136  360443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:08:30.530442  360443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:08:30.531774  360443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:08:30.532907  360443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:08:30.534245  360443 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:08:30.558319  360443 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:08:30.558422  360443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:30.614104  360443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-15 09:08:30.604256949 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:30.614223  360443 docker.go:319] overlay module found
	I1115 09:08:30.615847  360443 out.go:179] * Using the docker driver based on user configuration
	I1115 09:08:30.617075  360443 start.go:309] selected driver: docker
	I1115 09:08:30.617093  360443 start.go:930] validating driver "docker" against <nil>
	I1115 09:08:30.617106  360443 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:08:30.617714  360443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:30.676046  360443 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-15 09:08:30.665694421 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:30.676198  360443 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:08:30.676456  360443 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:08:30.678733  360443 out.go:179] * Using Docker driver with root privileges
	I1115 09:08:30.680132  360443 cni.go:84] Creating CNI manager for ""
	I1115 09:08:30.680218  360443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:08:30.680231  360443 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:08:30.680321  360443 start.go:353] cluster config:
	{Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1115 09:08:30.681867  360443 out.go:179] * Starting "addons-454747" primary control-plane node in "addons-454747" cluster
	I1115 09:08:30.683166  360443 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:08:30.684497  360443 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:08:30.685561  360443 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:30.685610  360443 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:08:30.685640  360443 cache.go:65] Caching tarball of preloaded images
	I1115 09:08:30.685662  360443 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:08:30.685756  360443 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:08:30.685775  360443 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:08:30.686190  360443 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/config.json ...
	I1115 09:08:30.686223  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/config.json: {Name:mk47730805923e8dabc6c0167b68b1e7cdaa8bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:30.703537  360443 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:08:30.703680  360443 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:08:30.703705  360443 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:08:30.703709  360443 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:08:30.703721  360443 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:08:30.703726  360443 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1115 09:08:44.551564  360443 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1115 09:08:44.551619  360443 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:08:44.551665  360443 start.go:360] acquireMachinesLock for addons-454747: {Name:mk2e6cf2df2df659fccf71860e02c2b25f7f44a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:08:44.551794  360443 start.go:364] duration metric: took 99.288µs to acquireMachinesLock for "addons-454747"
	I1115 09:08:44.551827  360443 start.go:93] Provisioning new machine with config: &{Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:08:44.551937  360443 start.go:125] createHost starting for "" (driver="docker")
	I1115 09:08:44.553730  360443 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1115 09:08:44.553985  360443 start.go:159] libmachine.API.Create for "addons-454747" (driver="docker")
	I1115 09:08:44.554021  360443 client.go:173] LocalClient.Create starting
	I1115 09:08:44.554120  360443 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 09:08:44.846755  360443 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 09:08:44.886166  360443 cli_runner.go:164] Run: docker network inspect addons-454747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:08:44.903247  360443 cli_runner.go:211] docker network inspect addons-454747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:08:44.903344  360443 network_create.go:284] running [docker network inspect addons-454747] to gather additional debugging logs...
	I1115 09:08:44.903369  360443 cli_runner.go:164] Run: docker network inspect addons-454747
	W1115 09:08:44.920264  360443 cli_runner.go:211] docker network inspect addons-454747 returned with exit code 1
	I1115 09:08:44.920314  360443 network_create.go:287] error running [docker network inspect addons-454747]: docker network inspect addons-454747: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-454747 not found
	I1115 09:08:44.920327  360443 network_create.go:289] output of [docker network inspect addons-454747]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-454747 not found
	
	** /stderr **
	I1115 09:08:44.920534  360443 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:08:44.937797  360443 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e82e80}
	I1115 09:08:44.937854  360443 network_create.go:124] attempt to create docker network addons-454747 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1115 09:08:44.937910  360443 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-454747 addons-454747
	I1115 09:08:44.984019  360443 network_create.go:108] docker network addons-454747 192.168.49.0/24 created
	I1115 09:08:44.984056  360443 kic.go:121] calculated static IP "192.168.49.2" for the "addons-454747" container
	I1115 09:08:44.984119  360443 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:08:45.000796  360443 cli_runner.go:164] Run: docker volume create addons-454747 --label name.minikube.sigs.k8s.io=addons-454747 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:08:45.020696  360443 oci.go:103] Successfully created a docker volume addons-454747
	I1115 09:08:45.020811  360443 cli_runner.go:164] Run: docker run --rm --name addons-454747-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454747 --entrypoint /usr/bin/test -v addons-454747:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:08:48.698730  360443 cli_runner.go:217] Completed: docker run --rm --name addons-454747-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454747 --entrypoint /usr/bin/test -v addons-454747:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (3.677857103s)
	I1115 09:08:48.698770  360443 oci.go:107] Successfully prepared a docker volume addons-454747
	I1115 09:08:48.698848  360443 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:48.698861  360443 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 09:08:48.698921  360443 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454747:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 09:08:52.999414  360443 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-454747:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.300433631s)
	I1115 09:08:52.999451  360443 kic.go:203] duration metric: took 4.300585717s to extract preloaded images to volume ...
	W1115 09:08:52.999567  360443 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 09:08:52.999624  360443 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 09:08:52.999670  360443 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:08:53.055152  360443 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-454747 --name addons-454747 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-454747 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-454747 --network addons-454747 --ip 192.168.49.2 --volume addons-454747:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:08:53.341958  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Running}}
	I1115 09:08:53.360788  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:08:53.379124  360443 cli_runner.go:164] Run: docker exec addons-454747 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:08:53.429118  360443 oci.go:144] the created container "addons-454747" has a running status.
	I1115 09:08:53.429156  360443 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa...
	I1115 09:08:53.498032  360443 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:08:53.525547  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:08:53.542965  360443 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:08:53.542983  360443 kic_runner.go:114] Args: [docker exec --privileged addons-454747 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:08:53.611516  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:08:53.631809  360443 machine.go:94] provisionDockerMachine start ...
	I1115 09:08:53.631944  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:53.658493  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:53.658863  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:53.658887  360443 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:08:53.659755  360443 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60756->127.0.0.1:33144: read: connection reset by peer
	I1115 09:08:56.792073  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-454747
	
	I1115 09:08:56.792119  360443 ubuntu.go:182] provisioning hostname "addons-454747"
	I1115 09:08:56.792187  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:56.811097  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:56.811385  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:56.811424  360443 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-454747 && echo "addons-454747" | sudo tee /etc/hostname
	I1115 09:08:56.951043  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-454747
	
	I1115 09:08:56.951132  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:56.970355  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:56.970648  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:56.970675  360443 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-454747' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-454747/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-454747' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:08:57.101811  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:08:57.101854  360443 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:08:57.101887  360443 ubuntu.go:190] setting up certificates
	I1115 09:08:57.101904  360443 provision.go:84] configureAuth start
	I1115 09:08:57.101981  360443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454747
	I1115 09:08:57.123291  360443 provision.go:143] copyHostCerts
	I1115 09:08:57.123409  360443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:08:57.123571  360443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:08:57.123803  360443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:08:57.123921  360443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.addons-454747 san=[127.0.0.1 192.168.49.2 addons-454747 localhost minikube]
	I1115 09:08:57.400263  360443 provision.go:177] copyRemoteCerts
	I1115 09:08:57.400348  360443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:08:57.400387  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.419834  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:57.515235  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:08:57.535650  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:08:57.554023  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:08:57.571921  360443 provision.go:87] duration metric: took 469.992652ms to configureAuth
	I1115 09:08:57.571950  360443 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:08:57.572132  360443 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:08:57.572240  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.591068  360443 main.go:143] libmachine: Using SSH client type: native
	I1115 09:08:57.591298  360443 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I1115 09:08:57.591314  360443 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:08:57.840428  360443 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:08:57.840494  360443 machine.go:97] duration metric: took 4.208654448s to provisionDockerMachine
	I1115 09:08:57.840510  360443 client.go:176] duration metric: took 13.28648022s to LocalClient.Create
	I1115 09:08:57.840537  360443 start.go:167] duration metric: took 13.286552258s to libmachine.API.Create "addons-454747"
	I1115 09:08:57.840547  360443 start.go:293] postStartSetup for "addons-454747" (driver="docker")
	I1115 09:08:57.840565  360443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:08:57.840632  360443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:08:57.840684  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.858994  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:57.955912  360443 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:08:57.959755  360443 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:08:57.959782  360443 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:08:57.959794  360443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:08:57.959857  360443 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:08:57.959881  360443 start.go:296] duration metric: took 119.326869ms for postStartSetup
	I1115 09:08:57.960174  360443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454747
	I1115 09:08:57.979354  360443 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/config.json ...
	I1115 09:08:57.979662  360443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:08:57.979710  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:57.997384  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:58.088969  360443 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:08:58.093792  360443 start.go:128] duration metric: took 13.541816792s to createHost
	I1115 09:08:58.093822  360443 start.go:83] releasing machines lock for "addons-454747", held for 13.542012001s
	I1115 09:08:58.093926  360443 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-454747
	I1115 09:08:58.112306  360443 ssh_runner.go:195] Run: cat /version.json
	I1115 09:08:58.112360  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:58.112462  360443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:08:58.112560  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:08:58.131279  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:58.131849  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:08:58.278347  360443 ssh_runner.go:195] Run: systemctl --version
	I1115 09:08:58.284841  360443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:08:58.320500  360443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:08:58.325496  360443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:08:58.325565  360443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:08:58.353369  360443 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:08:58.353406  360443 start.go:496] detecting cgroup driver to use...
	I1115 09:08:58.353445  360443 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:08:58.353505  360443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:08:58.370580  360443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:08:58.382815  360443 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:08:58.382876  360443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:08:58.398955  360443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:08:58.416736  360443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:08:58.498408  360443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:08:58.587970  360443 docker.go:234] disabling docker service ...
	I1115 09:08:58.588042  360443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:08:58.607652  360443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:08:58.620908  360443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:08:58.707145  360443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:08:58.789765  360443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:08:58.802580  360443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:08:58.816215  360443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:08:58.816272  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.826818  360443 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:08:58.826882  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.835988  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.844958  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.853802  360443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:08:58.862273  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.871528  360443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.885142  360443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:08:58.894331  360443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:08:58.902157  360443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:08:58.909632  360443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:08:58.988834  360443 ssh_runner.go:195] Run: sudo systemctl restart crio
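Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the minikube pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl before crio is restarted. An illustrative check of the resulting values (the key names come from the commands above; the exact file layout may differ):

    docker exec addons-454747 sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",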
	I1115 09:08:59.096088  360443 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:08:59.096176  360443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:08:59.100178  360443 start.go:564] Will wait 60s for crictl version
	I1115 09:08:59.100232  360443 ssh_runner.go:195] Run: which crictl
	I1115 09:08:59.103976  360443 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:08:59.128578  360443 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:08:59.128692  360443 ssh_runner.go:195] Run: crio --version
	I1115 09:08:59.157293  360443 ssh_runner.go:195] Run: crio --version
	I1115 09:08:59.188168  360443 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:08:59.189679  360443 cli_runner.go:164] Run: docker network inspect addons-454747 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:08:59.207620  360443 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:08:59.211932  360443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:08:59.222601  360443 kubeadm.go:884] updating cluster {Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:08:59.222807  360443 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:59.222855  360443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:08:59.254925  360443 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:08:59.254948  360443 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:08:59.254995  360443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:08:59.282354  360443 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:08:59.282385  360443 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:08:59.282408  360443 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:08:59.282514  360443 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-454747 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
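Note: in the kubelet drop-in above, the empty ExecStart= line clears the ExecStart inherited from the base kubelet.service before the minikube-specific command line is set; the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step a few lines below. To see the merged unit on the node:

    docker exec addons-454747 systemctl cat kubelet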
	I1115 09:08:59.282603  360443 ssh_runner.go:195] Run: crio config
	I1115 09:08:59.329698  360443 cni.go:84] Creating CNI manager for ""
	I1115 09:08:59.329723  360443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:08:59.329754  360443 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:08:59.329784  360443 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-454747 NodeName:addons-454747 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:08:59.329968  360443 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-454747"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:08:59.330048  360443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:08:59.338274  360443 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:08:59.338342  360443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:08:59.346278  360443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:08:59.359786  360443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:08:59.375261  360443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
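Note: the three scp steps above stage the kubelet drop-in, the kubelet service unit, and the generated kubeadm config (the multi-document YAML shown earlier) as /var/tmp/minikube/kubeadm.yaml.new ahead of the real init. A hedged way to dry-run the same config by hand, assuming the versioned kubeadm binary path used in this run:

    docker exec addons-454747 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run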
	I1115 09:08:59.388379  360443 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:08:59.392201  360443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:08:59.403078  360443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:08:59.483000  360443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:08:59.508041  360443 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747 for IP: 192.168.49.2
	I1115 09:08:59.508068  360443 certs.go:195] generating shared ca certs ...
	I1115 09:08:59.508087  360443 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.508231  360443 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:08:59.661356  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt ...
	I1115 09:08:59.661402  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt: {Name:mkf1de4e8a78ad57f64e4139f594a98d52310695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.661592  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key ...
	I1115 09:08:59.661605  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key: {Name:mk31505a0317517b998de0b0f06cb2b6b31f4e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.661681  360443 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:08:59.734298  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt ...
	I1115 09:08:59.734324  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt: {Name:mk61320cad84fd3ba4ccac41f30e7dc5aecf90ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.734527  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key ...
	I1115 09:08:59.734549  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key: {Name:mke3e6a615bf275abcd57bdc4cb81bfd7c5e6f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:08:59.734648  360443 certs.go:257] generating profile certs ...
	I1115 09:08:59.734718  360443 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.key
	I1115 09:08:59.734732  360443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt with IP's: []
	I1115 09:09:00.089596  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt ...
	I1115 09:09:00.089627  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: {Name:mk3cd13bba85bc95005ef2728ab8d27051685829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.089805  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.key ...
	I1115 09:09:00.089818  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.key: {Name:mk90474fe0cb9333f9149c33a4f5fd0fe06dd9e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.089890  360443 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845
	I1115 09:09:00.089909  360443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1115 09:09:00.324938  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845 ...
	I1115 09:09:00.324967  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845: {Name:mkf21ac2d95f37eea0c922fbb7d554c2f3dd46e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.325129  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845 ...
	I1115 09:09:00.325142  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845: {Name:mk193a24957f0f76901390e7a684e487923039a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.325215  360443 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt.5b973845 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt
	I1115 09:09:00.325291  360443 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key.5b973845 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key
	I1115 09:09:00.325339  360443 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key
	I1115 09:09:00.325357  360443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt with IP's: []
	I1115 09:09:00.820322  360443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt ...
	I1115 09:09:00.820356  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt: {Name:mk81ea4b79c506c3383e76f0970fe543f86962b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.820571  360443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key ...
	I1115 09:09:00.820588  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key: {Name:mk881c7f1bb83460acc56df5bfc62da91bb98187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:00.820762  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:09:00.820797  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:09:00.820821  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:09:00.820842  360443 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:09:00.821526  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:09:00.840316  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:09:00.858018  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:09:00.875581  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:09:00.892959  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:09:00.911040  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:09:00.929789  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:09:00.948060  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:09:00.966492  360443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:09:00.985825  360443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:09:00.998726  360443 ssh_runner.go:195] Run: openssl version
	I1115 09:09:01.005602  360443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:09:01.017630  360443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:09:01.021880  360443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:09:01.021951  360443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:09:01.059536  360443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
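Note: the steps above install the minikube CA into the node's OpenSSL trust store: the certificate is linked as /etc/ssl/certs/minikubeCA.pem, its subject hash is computed, and a <hash>.0 symlink is created so TLS clients on the node trust the cluster CA. Reproduced by hand, the hash for this CA is the b5213941 seen in the symlink name:

    # run inside the addons-454747 node (e.g. docker exec -it addons-454747 bash)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0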
	I1115 09:09:01.069995  360443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:09:01.073709  360443 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:09:01.073769  360443 kubeadm.go:401] StartCluster: {Name:addons-454747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-454747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:09:01.073853  360443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:09:01.073901  360443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:09:01.102042  360443 cri.go:89] found id: ""
	I1115 09:09:01.102118  360443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:09:01.110701  360443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:09:01.118940  360443 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:09:01.119043  360443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:09:01.127032  360443 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:09:01.127050  360443 kubeadm.go:158] found existing configuration files:
	
	I1115 09:09:01.127100  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:09:01.134997  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:09:01.135067  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:09:01.142873  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:09:01.150916  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:09:01.150972  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:09:01.158666  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:09:01.166665  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:09:01.166739  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:09:01.174611  360443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:09:01.183030  360443 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:09:01.183089  360443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:09:01.191356  360443 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:09:01.250110  360443 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:09:01.308835  360443 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:09:11.413706  360443 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:09:11.413773  360443 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:09:11.413890  360443 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:09:11.413993  360443 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:09:11.414066  360443 kubeadm.go:319] OS: Linux
	I1115 09:09:11.414142  360443 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:09:11.414213  360443 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:09:11.414284  360443 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:09:11.414360  360443 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:09:11.414449  360443 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:09:11.414523  360443 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:09:11.414600  360443 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:09:11.414670  360443 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:09:11.414790  360443 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:09:11.414902  360443 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:09:11.415009  360443 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:09:11.415114  360443 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:09:11.417550  360443 out.go:252]   - Generating certificates and keys ...
	I1115 09:09:11.417634  360443 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:09:11.417728  360443 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:09:11.417809  360443 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:09:11.417889  360443 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:09:11.417958  360443 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:09:11.418028  360443 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:09:11.418105  360443 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:09:11.418343  360443 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-454747 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:09:11.418454  360443 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:09:11.418557  360443 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-454747 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1115 09:09:11.418636  360443 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:09:11.418713  360443 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:09:11.418779  360443 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:09:11.418861  360443 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:09:11.418904  360443 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:09:11.418953  360443 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:09:11.419003  360443 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:09:11.419070  360443 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:09:11.419140  360443 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:09:11.419230  360443 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:09:11.419322  360443 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:09:11.420630  360443 out.go:252]   - Booting up control plane ...
	I1115 09:09:11.420715  360443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:09:11.420800  360443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:09:11.420861  360443 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:09:11.420965  360443 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:09:11.421058  360443 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:09:11.421173  360443 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:09:11.421294  360443 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:09:11.421333  360443 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:09:11.421465  360443 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:09:11.421621  360443 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:09:11.421704  360443 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000963177s
	I1115 09:09:11.421821  360443 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:09:11.421928  360443 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1115 09:09:11.422031  360443 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:09:11.422133  360443 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:09:11.422202  360443 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.559175368s
	I1115 09:09:11.422259  360443 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.241410643s
	I1115 09:09:11.422324  360443 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00226888s
	I1115 09:09:11.422443  360443 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:09:11.422596  360443 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:09:11.422670  360443 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:09:11.422837  360443 kubeadm.go:319] [mark-control-plane] Marking the node addons-454747 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:09:11.422911  360443 kubeadm.go:319] [bootstrap-token] Using token: iog1xk.8n83pbeopade97db
	I1115 09:09:11.424318  360443 out.go:252]   - Configuring RBAC rules ...
	I1115 09:09:11.424454  360443 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:09:11.424557  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:09:11.424714  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:09:11.424838  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:09:11.424940  360443 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:09:11.425022  360443 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:09:11.425183  360443 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:09:11.425247  360443 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:09:11.425321  360443 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:09:11.425333  360443 kubeadm.go:319] 
	I1115 09:09:11.425447  360443 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:09:11.425457  360443 kubeadm.go:319] 
	I1115 09:09:11.425568  360443 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:09:11.425576  360443 kubeadm.go:319] 
	I1115 09:09:11.425597  360443 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:09:11.425648  360443 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:09:11.425700  360443 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:09:11.425713  360443 kubeadm.go:319] 
	I1115 09:09:11.425794  360443 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:09:11.425803  360443 kubeadm.go:319] 
	I1115 09:09:11.425875  360443 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:09:11.425883  360443 kubeadm.go:319] 
	I1115 09:09:11.425958  360443 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:09:11.426063  360443 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:09:11.426140  360443 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:09:11.426148  360443 kubeadm.go:319] 
	I1115 09:09:11.426220  360443 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:09:11.426287  360443 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:09:11.426292  360443 kubeadm.go:319] 
	I1115 09:09:11.426357  360443 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token iog1xk.8n83pbeopade97db \
	I1115 09:09:11.426472  360443 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 09:09:11.426501  360443 kubeadm.go:319] 	--control-plane 
	I1115 09:09:11.426507  360443 kubeadm.go:319] 
	I1115 09:09:11.426592  360443 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:09:11.426604  360443 kubeadm.go:319] 
	I1115 09:09:11.426673  360443 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token iog1xk.8n83pbeopade97db \
	I1115 09:09:11.426779  360443 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
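Note: the bootstrap token in the join commands above (iog1xk.8n83pbeopade97db) expires after the 24h ttl set in the kubeadm config, and a single-node profile never uses it. If a worker ever needed to join later, a fresh command could be generated on the control plane, for example:

    # run inside the node; kubeadm sits under the versioned binaries dir in this run
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm token create --print-join-command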
	I1115 09:09:11.426789  360443 cni.go:84] Creating CNI manager for ""
	I1115 09:09:11.426795  360443 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:09:11.428292  360443 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:09:11.429459  360443 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:09:11.433869  360443 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:09:11.433890  360443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:09:11.446740  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
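Note: because the docker driver is paired with the crio runtime, minikube selects kindnet as the CNI and applies its manifest (staged at /var/tmp/minikube/cni.yaml) with the bundled kubectl. A rough way to confirm the CNI came up once the node is Ready (the app=kindnet label is an assumption based on the default kindnet manifest):

    kubectl --context addons-454747 -n kube-system get pods -l app=kindnet -o wide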
	I1115 09:09:11.647335  360443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:09:11.647445  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:11.647455  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-454747 minikube.k8s.io/updated_at=2025_11_15T09_09_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=addons-454747 minikube.k8s.io/primary=true
	I1115 09:09:11.660140  360443 ops.go:34] apiserver oom_adj: -16
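Note: the minikube-rbac ClusterRoleBinding created above grants cluster-admin to the kube-system default service account, and the repeated 'kubectl get sa default' calls that follow are minikube waiting for the default service account to appear (the elevateKubeSystemPrivileges step timed further below). To inspect the binding afterwards:

    kubectl --context addons-454747 get clusterrolebinding minikube-rbac -o yaml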
	I1115 09:09:11.723736  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:12.224188  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:12.724492  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:13.223989  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:13.723801  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:14.224584  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:14.724113  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:15.224164  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:15.724243  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:16.224450  360443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:09:16.289205  360443 kubeadm.go:1114] duration metric: took 4.641853418s to wait for elevateKubeSystemPrivileges
	I1115 09:09:16.289238  360443 kubeadm.go:403] duration metric: took 15.215474747s to StartCluster
	I1115 09:09:16.289259  360443 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:16.289409  360443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:09:16.289938  360443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:09:16.290164  360443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:09:16.290180  360443 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:09:16.290251  360443 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
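Note: the toEnable map above is the effective addon set for this profile: registry, registry-creds, ingress, ingress-dns, inspektor-gadget, metrics-server, csi-hostpath-driver, volcano, volumesnapshots, yakd, cloud-spanner, the nvidia and amd device plugins, default-storageclass, both storage provisioners, and gcp-auth are on; everything else is off. The same view is available from the CLI with:

    out/minikube-linux-amd64 -p addons-454747 addons list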
	I1115 09:09:16.290468  360443 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-454747"
	I1115 09:09:16.290491  360443 addons.go:70] Setting default-storageclass=true in profile "addons-454747"
	I1115 09:09:16.290502  360443 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:09:16.290516  360443 addons.go:70] Setting registry=true in profile "addons-454747"
	I1115 09:09:16.290537  360443 addons.go:70] Setting registry-creds=true in profile "addons-454747"
	I1115 09:09:16.290539  360443 addons.go:70] Setting storage-provisioner=true in profile "addons-454747"
	I1115 09:09:16.290508  360443 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-454747"
	I1115 09:09:16.290558  360443 addons.go:239] Setting addon storage-provisioner=true in "addons-454747"
	I1115 09:09:16.290558  360443 addons.go:70] Setting volcano=true in profile "addons-454747"
	I1115 09:09:16.290565  360443 addons.go:70] Setting gcp-auth=true in profile "addons-454747"
	I1115 09:09:16.290500  360443 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-454747"
	I1115 09:09:16.290573  360443 addons.go:239] Setting addon volcano=true in "addons-454747"
	I1115 09:09:16.290574  360443 addons.go:239] Setting addon registry-creds=true in "addons-454747"
	I1115 09:09:16.290580  360443 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-454747"
	I1115 09:09:16.290589  360443 addons.go:70] Setting ingress=true in profile "addons-454747"
	I1115 09:09:16.290603  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290538  360443 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-454747"
	I1115 09:09:16.290608  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290611  360443 addons.go:239] Setting addon ingress=true in "addons-454747"
	I1115 09:09:16.290545  360443 addons.go:70] Setting cloud-spanner=true in profile "addons-454747"
	I1115 09:09:16.290624  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290628  360443 addons.go:239] Setting addon cloud-spanner=true in "addons-454747"
	I1115 09:09:16.290633  360443 addons.go:70] Setting metrics-server=true in profile "addons-454747"
	I1115 09:09:16.290641  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290646  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290648  360443 addons.go:239] Setting addon metrics-server=true in "addons-454747"
	I1115 09:09:16.290689  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290578  360443 addons.go:70] Setting ingress-dns=true in profile "addons-454747"
	I1115 09:09:16.291001  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291008  360443 addons.go:239] Setting addon ingress-dns=true in "addons-454747"
	I1115 09:09:16.291050  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.291199  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291236  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291236  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291278  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291537  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291800  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.291912  360443 addons.go:70] Setting volumesnapshots=true in profile "addons-454747"
	I1115 09:09:16.291929  360443 addons.go:239] Setting addon volumesnapshots=true in "addons-454747"
	I1115 09:09:16.291954  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.292477  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290531  360443 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-454747"
	I1115 09:09:16.293712  360443 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-454747"
	I1115 09:09:16.293741  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.294222  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290603  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.290604  360443 mustload.go:66] Loading cluster: addons-454747
	I1115 09:09:16.290528  360443 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-454747"
	I1115 09:09:16.295243  360443 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-454747"
	I1115 09:09:16.295292  360443 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:09:16.295557  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.295595  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.295964  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290567  360443 addons.go:239] Setting addon registry=true in "addons-454747"
	I1115 09:09:16.298496  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.298576  360443 out.go:179] * Verifying Kubernetes components...
	I1115 09:09:16.290624  360443 addons.go:70] Setting inspektor-gadget=true in profile "addons-454747"
	I1115 09:09:16.298713  360443 addons.go:239] Setting addon inspektor-gadget=true in "addons-454747"
	I1115 09:09:16.290603  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.299341  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.290477  360443 addons.go:70] Setting yakd=true in profile "addons-454747"
	I1115 09:09:16.299668  360443 addons.go:239] Setting addon yakd=true in "addons-454747"
	I1115 09:09:16.299711  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.291807  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.300374  360443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:09:16.301663  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.307509  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.310000  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.311251  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.353959  360443 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1115 09:09:16.355245  360443 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:09:16.355268  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1115 09:09:16.355329  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.355562  360443 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1115 09:09:16.358747  360443 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1115 09:09:16.358767  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1115 09:09:16.358968  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.362868  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1115 09:09:16.364073  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1115 09:09:16.364831  360443 addons.go:239] Setting addon default-storageclass=true in "addons-454747"
	I1115 09:09:16.364901  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.365461  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.366324  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1115 09:09:16.370207  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1115 09:09:16.371795  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1115 09:09:16.372943  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.374537  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1115 09:09:16.380792  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1115 09:09:16.381898  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1115 09:09:16.381918  360443 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1115 09:09:16.381996  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.387068  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1115 09:09:16.390062  360443 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1115 09:09:16.392205  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1115 09:09:16.392753  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1115 09:09:16.393048  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.403657  360443 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-454747"
	I1115 09:09:16.403707  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:16.404923  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:16.406494  360443 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1115 09:09:16.407727  360443 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1115 09:09:16.409791  360443 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1115 09:09:16.409956  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1115 09:09:16.410003  360443 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1115 09:09:16.410096  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.412058  360443 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:09:16.412080  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1115 09:09:16.412138  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.412494  360443 out.go:179]   - Using image docker.io/registry:3.0.0
	W1115 09:09:16.413237  360443 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1115 09:09:16.413862  360443 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1115 09:09:16.413879  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1115 09:09:16.413942  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.416108  360443 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1115 09:09:16.417499  360443 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:09:16.417582  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1115 09:09:16.417756  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.421523  360443 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:09:16.424477  360443 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1115 09:09:16.425363  360443 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:09:16.425385  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:09:16.425458  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.426076  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1115 09:09:16.426104  360443 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1115 09:09:16.426159  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.426684  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:09:16.427972  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:09:16.429423  360443 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1115 09:09:16.431532  360443 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:09:16.432343  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1115 09:09:16.433490  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.433783  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1115 09:09:16.433486  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.435011  360443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:09:16.435184  360443 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:09:16.435218  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1115 09:09:16.435288  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.451568  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.457927  360443 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1115 09:09:16.459351  360443 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:09:16.460421  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1115 09:09:16.460521  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.461563  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.463997  360443 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:09:16.464019  360443 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:09:16.464076  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.479087  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.494630  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.514420  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.517754  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.521186  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.521777  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.523485  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.536115  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.536902  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.538819  360443 out.go:179]   - Using image docker.io/busybox:stable
	W1115 09:09:16.539893  360443 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:09:16.539931  360443 retry.go:31] will retry after 234.836428ms: ssh: handshake failed: EOF
	I1115 09:09:16.541409  360443 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1115 09:09:16.543522  360443 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:09:16.543570  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1115 09:09:16.543636  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:16.546435  360443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:09:16.547816  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.551456  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	W1115 09:09:16.556117  360443 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1115 09:09:16.556358  360443 retry.go:31] will retry after 256.485753ms: ssh: handshake failed: EOF
	I1115 09:09:16.583654  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:16.649898  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1115 09:09:16.652465  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1115 09:09:16.656195  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1115 09:09:16.666894  360443 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1115 09:09:16.666931  360443 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1115 09:09:16.676158  360443 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1115 09:09:16.676190  360443 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1115 09:09:16.687672  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1115 09:09:16.687673  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:09:16.687843  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1115 09:09:16.687856  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1115 09:09:16.707157  360443 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1115 09:09:16.707186  360443 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1115 09:09:16.715512  360443 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:09:16.715643  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1115 09:09:16.721169  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1115 09:09:16.721194  360443 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1115 09:09:16.724644  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1115 09:09:16.730790  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1115 09:09:16.735701  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1115 09:09:16.735773  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1115 09:09:16.745889  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1115 09:09:16.745919  360443 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1115 09:09:16.774846  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1115 09:09:16.774881  360443 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1115 09:09:16.781815  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1115 09:09:16.791146  360443 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:09:16.791175  360443 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1115 09:09:16.793625  360443 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1115 09:09:16.793654  360443 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1115 09:09:16.794500  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1115 09:09:16.794535  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1115 09:09:16.801478  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1115 09:09:16.808935  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1115 09:09:16.808962  360443 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1115 09:09:16.810041  360443 node_ready.go:35] waiting up to 6m0s for node "addons-454747" to be "Ready" ...
	I1115 09:09:16.811113  360443 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1115 09:09:16.838552  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1115 09:09:16.838814  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1115 09:09:16.840980  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1115 09:09:16.841080  360443 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1115 09:09:16.867156  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1115 09:09:16.881654  360443 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:09:16.881745  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1115 09:09:16.902724  360443 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:09:16.902812  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1115 09:09:16.911088  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1115 09:09:16.911121  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1115 09:09:16.959750  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1115 09:09:16.960890  360443 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1115 09:09:16.960919  360443 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1115 09:09:16.992997  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:09:17.011847  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:09:17.021872  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1115 09:09:17.021900  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1115 09:09:17.090886  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1115 09:09:17.090917  360443 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1115 09:09:17.105211  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1115 09:09:17.178613  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1115 09:09:17.178647  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1115 09:09:17.212176  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1115 09:09:17.212208  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1115 09:09:17.265738  360443 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:09:17.265770  360443 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1115 09:09:17.292635  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1115 09:09:17.314409  360443 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-454747" context rescaled to 1 replicas
	I1115 09:09:17.998480  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.216618688s)
	I1115 09:09:17.998551  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.197027334s)
	I1115 09:09:17.998584  360443 addons.go:480] Verifying addon registry=true in "addons-454747"
	I1115 09:09:17.998615  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.131354939s)
	I1115 09:09:17.998641  360443 addons.go:480] Verifying addon metrics-server=true in "addons-454747"
	I1115 09:09:17.998783  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.03895073s)
	I1115 09:09:17.998935  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.268113693s)
	I1115 09:09:17.998956  360443 addons.go:480] Verifying addon ingress=true in "addons-454747"
	I1115 09:09:18.000467  360443 out.go:179] * Verifying registry addon...
	I1115 09:09:18.000494  360443 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-454747 service yakd-dashboard -n yakd-dashboard
	
	I1115 09:09:18.001203  360443 out.go:179] * Verifying ingress addon...
	I1115 09:09:18.002931  360443 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1115 09:09:18.004140  360443 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1115 09:09:18.006211  360443 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:09:18.006235  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:18.006913  360443 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1115 09:09:18.006928  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:18.309304  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.316258189s)
	I1115 09:09:18.309352  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.297468733s)
	W1115 09:09:18.309369  360443 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:09:18.309414  360443 retry.go:31] will retry after 131.526232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1115 09:09:18.309444  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.204195325s)
	I1115 09:09:18.309643  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.016970353s)
	I1115 09:09:18.309673  360443 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-454747"
	I1115 09:09:18.311138  360443 out.go:179] * Verifying csi-hostpath-driver addon...
	I1115 09:09:18.313824  360443 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1115 09:09:18.316100  360443 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:09:18.316119  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:18.441679  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1115 09:09:18.506033  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:18.506667  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:09:18.813605  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:18.817053  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:19.007068  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:19.007262  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:19.317557  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:19.506984  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:19.507307  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:19.816404  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:20.006947  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:20.007115  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:20.317173  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:20.506594  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:20.507047  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:20.817110  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:20.936954  360443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.495231549s)
	I1115 09:09:21.006714  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:21.008737  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:09:21.313561  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:21.316891  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:21.506867  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:21.507110  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:21.817023  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:22.006692  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:22.007015  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:22.317039  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:22.507264  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:22.507535  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:22.817760  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:23.006282  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:23.006691  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:23.316977  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:23.506252  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:23.506764  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1115 09:09:23.813750  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:23.817224  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:23.980247  360443 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1115 09:09:23.980313  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:23.998193  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:24.007171  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:24.007805  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:24.099375  360443 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1115 09:09:24.112379  360443 addons.go:239] Setting addon gcp-auth=true in "addons-454747"
	I1115 09:09:24.112499  360443 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:09:24.113001  360443 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:09:24.130876  360443 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1115 09:09:24.130925  360443 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:09:24.148278  360443 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:09:24.240555  360443 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1115 09:09:24.242151  360443 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1115 09:09:24.243254  360443 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1115 09:09:24.243270  360443 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1115 09:09:24.257318  360443 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1115 09:09:24.257348  360443 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1115 09:09:24.271231  360443 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:09:24.271259  360443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1115 09:09:24.284687  360443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1115 09:09:24.317030  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:24.506197  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:24.506897  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:24.602017  360443 addons.go:480] Verifying addon gcp-auth=true in "addons-454747"
	I1115 09:09:24.603728  360443 out.go:179] * Verifying gcp-auth addon...
	I1115 09:09:24.605930  360443 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1115 09:09:24.608187  360443 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1115 09:09:24.608202  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:24.816317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:25.006092  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:25.006836  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:25.109419  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:25.316317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:25.506136  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:25.506853  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:25.609558  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:25.816808  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:26.007042  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:26.007151  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:26.109074  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:26.312995  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:26.316366  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:26.506586  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:26.507452  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:26.609249  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:26.816438  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:27.006650  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:27.007402  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:27.109449  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:27.316798  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:27.507159  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:27.507166  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:27.609676  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:27.817178  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:28.006320  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:28.006934  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:28.109686  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:28.313471  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:28.316852  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:28.507200  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:28.507387  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:28.609191  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:28.816433  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:29.006434  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:29.007238  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:29.109285  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:29.317121  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:29.506257  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:29.506889  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:29.609664  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:29.817072  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:30.006233  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:30.006690  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:30.109743  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:30.313966  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:30.316327  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:30.506568  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:30.507071  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:30.609311  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:30.816653  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:31.006905  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:31.007143  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:31.109990  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:31.316476  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:31.507979  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:31.508052  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:31.609040  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:31.816945  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:32.005951  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:32.006803  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:32.109741  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:32.317339  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:32.506349  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:32.506976  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:32.608801  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:32.813610  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:32.816916  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:33.005858  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:33.007471  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:33.109286  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:33.316344  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:33.506032  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:33.507030  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:33.608735  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:33.816598  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:34.006410  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:34.007114  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:34.108824  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:34.317002  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:34.505787  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:34.506565  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:34.609187  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:34.816919  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:35.005724  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:35.007536  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:35.109705  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:35.313492  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:35.316630  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:35.506838  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:35.507235  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:35.609436  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:35.816872  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:36.006771  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:36.006850  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:36.109582  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:36.316773  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:36.506288  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:36.506743  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:36.609712  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:36.816904  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:37.005767  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:37.007367  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:37.109190  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:37.316085  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:37.506076  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:37.506924  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:37.609629  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:37.813593  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:37.816903  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:38.006769  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:38.006970  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:38.109628  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:38.316734  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:38.508960  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:38.509132  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:38.608643  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:38.816937  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:39.005923  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:39.007645  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:39.109315  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:39.316440  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:39.506504  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:39.507250  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:39.609032  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:39.817119  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:40.005832  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:40.006753  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:40.109611  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:40.313455  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:40.316833  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:40.506977  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:40.506993  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:40.608951  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:40.816106  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:41.005820  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:41.006585  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:41.109572  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:41.317530  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:41.506820  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:41.507223  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:41.609428  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:41.817141  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:42.005880  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:42.006691  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:42.109665  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:42.313531  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:42.316656  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:42.506907  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:42.507439  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:42.609111  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:42.816561  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:43.006517  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:43.007217  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:43.109338  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:43.316288  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:43.506369  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:43.506989  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:43.609789  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:43.816718  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:44.006580  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:44.007628  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:44.109369  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:44.316273  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:44.506182  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:44.507103  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:44.608504  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:44.813182  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:44.816191  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:45.006135  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:45.006959  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:45.109523  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:45.316517  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:45.506387  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:45.507235  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:45.609159  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:45.816929  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:46.005866  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:46.006326  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:46.109005  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:46.316378  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:46.506274  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:46.507205  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:46.609293  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:46.817013  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:47.005906  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:47.006792  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:47.109560  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:47.313360  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:47.316420  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:47.506518  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:47.506990  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:47.609586  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:47.816594  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:48.006387  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:48.007196  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:48.109087  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:48.316619  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:48.505935  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:48.506921  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:48.609853  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:48.816441  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:49.006331  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:49.007261  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:49.109468  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:49.316423  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:49.506278  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:49.506933  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:49.609581  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:49.812998  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:49.816254  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:50.006117  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:50.006983  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:50.108841  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:50.316918  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:50.505963  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:50.507783  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:50.609752  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:50.816874  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:51.005647  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:51.007362  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:51.109090  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:51.316847  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:51.507085  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:51.507252  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:51.608900  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:51.813711  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:51.816920  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:52.006037  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:52.006707  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:52.109595  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:52.317120  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:52.505897  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:52.506868  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:52.609551  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:52.816550  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:53.006554  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:53.007165  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:53.108796  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:53.317050  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:53.506198  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:53.506996  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:53.609577  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:53.816361  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:54.006424  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:54.007035  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:54.109902  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:54.313816  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:54.317104  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:54.506427  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:54.506844  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:54.610043  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:54.816129  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:55.006150  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:55.007073  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:55.108755  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:55.316868  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:55.505842  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:55.507299  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:55.609593  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:55.816813  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:56.006676  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:56.006877  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:56.109700  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:56.316756  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:56.505862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:56.507657  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:56.609564  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1115 09:09:56.813409  360443 node_ready.go:57] node "addons-454747" has "Ready":"False" status (will retry)
	I1115 09:09:56.816565  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:57.006606  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:57.007363  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:57.109248  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:57.316387  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:57.506626  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:57.507039  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:57.608877  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:57.813101  360443 node_ready.go:49] node "addons-454747" is "Ready"
	I1115 09:09:57.813140  360443 node_ready.go:38] duration metric: took 41.003062283s for node "addons-454747" to be "Ready" ...
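The polling above is minikube's node_ready wait: the node object is re-read until its Ready condition reports True. A minimal, illustrative sketch of the same check (not minikube's node_ready.go; it assumes kubectl is on PATH and the current kubeconfig context points at addons-454747):

// poll_node_ready.go - editor's illustrative sketch, not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Read just the Ready condition, the same field the log above keys on.
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", "addons-454747",
			"-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println(`node "addons-454747" is "Ready"`)
			return
		}
		// Not Ready yet (or the API is unreachable); retry, as the warnings above do.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the node to become Ready")
}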
	I1115 09:09:57.813160  360443 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:09:57.813243  360443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:09:57.817316  360443 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1115 09:09:57.817345  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:57.830296  360443 api_server.go:72] duration metric: took 41.540079431s to wait for apiserver process to appear ...
	I1115 09:09:57.830324  360443 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:09:57.830353  360443 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:09:57.834714  360443 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:09:57.835754  360443 api_server.go:141] control plane version: v1.34.1
	I1115 09:09:57.835785  360443 api_server.go:131] duration metric: took 5.452451ms to wait for apiserver health ...
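After the kube-apiserver process is found via pgrep, readiness is confirmed by polling the /healthz endpoint until it returns HTTP 200 with body "ok", as the lines above record. A minimal sketch of such a probe (illustrative only; it skips TLS verification for the cluster's self-signed serving certificate, whereas minikube's own client trusts the cluster CA):

// healthz_probe.go - editor's illustrative sketch, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification; the real check uses the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never reported healthy")
}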
	I1115 09:09:57.835798  360443 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:09:57.841339  360443 system_pods.go:59] 20 kube-system pods found
	I1115 09:09:57.841379  360443 system_pods.go:61] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending
	I1115 09:09:57.841432  360443 system_pods.go:61] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:57.841449  360443 system_pods.go:61] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:57.841462  360443 system_pods.go:61] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending
	I1115 09:09:57.841468  360443 system_pods.go:61] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending
	I1115 09:09:57.841476  360443 system_pods.go:61] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:57.841480  360443 system_pods.go:61] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:57.841486  360443 system_pods.go:61] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:57.841494  360443 system_pods.go:61] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:57.841503  360443 system_pods.go:61] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:57.841512  360443 system_pods.go:61] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:57.841517  360443 system_pods.go:61] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:57.841529  360443 system_pods.go:61] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:57.841538  360443 system_pods.go:61] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending
	I1115 09:09:57.841546  360443 system_pods.go:61] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:57.841554  360443 system_pods.go:61] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:57.841564  360443 system_pods.go:61] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending
	I1115 09:09:57.841570  360443 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending
	I1115 09:09:57.841575  360443 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending
	I1115 09:09:57.841583  360443 system_pods.go:61] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:57.841592  360443 system_pods.go:74] duration metric: took 5.786396ms to wait for pod list to return data ...
	I1115 09:09:57.841603  360443 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:09:57.844315  360443 default_sa.go:45] found service account: "default"
	I1115 09:09:57.844338  360443 default_sa.go:55] duration metric: took 2.726797ms for default service account to be created ...
	I1115 09:09:57.844349  360443 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:09:57.847705  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:57.847734  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending
	I1115 09:09:57.847751  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:57.847760  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:57.847769  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:57.847779  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending
	I1115 09:09:57.847785  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:57.847791  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:57.847800  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:57.847805  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:57.847816  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:57.847825  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:57.847831  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:57.847840  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:57.847845  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending
	I1115 09:09:57.847852  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:57.847864  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:57.847871  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending
	I1115 09:09:57.847880  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending
	I1115 09:09:57.847887  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending
	I1115 09:09:57.847897  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:57.847918  360443 retry.go:31] will retry after 225.081309ms: missing components: kube-dns
	I1115 09:09:58.005901  360443 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1115 09:09:58.005926  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:58.006898  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:58.078759  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:58.078806  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:09:58.078820  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:58.078830  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:58.078839  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:58.078854  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:09:58.078862  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:58.078869  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:58.078874  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:58.078880  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:58.078888  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:58.078903  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:58.078910  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:58.078917  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:58.078931  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:09:58.078942  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:58.078956  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:58.078969  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:09:58.078984  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.078996  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.079083  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:58.079110  360443 retry.go:31] will retry after 313.960058ms: missing components: kube-dns
	I1115 09:09:58.177164  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:58.317589  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:58.397501  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:58.397542  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:09:58.397553  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:09:58.397561  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:58.397568  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:58.397577  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:09:58.397581  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:58.397586  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:58.397589  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:58.397593  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:58.397598  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:58.397604  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:58.397609  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:58.397616  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:58.397622  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:09:58.397627  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:58.397633  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:58.397637  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:09:58.397642  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.397651  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.397659  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:09:58.397676  360443 retry.go:31] will retry after 447.659541ms: missing components: kube-dns
	I1115 09:09:58.507250  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:58.507388  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:58.609694  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:58.818266  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:58.850441  360443 system_pods.go:86] 20 kube-system pods found
	I1115 09:09:58.850481  360443 system_pods.go:89] "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1115 09:09:58.850489  360443 system_pods.go:89] "coredns-66bc5c9577-cjxcs" [5e1520e6-262d-4791-8a6c-02723fd2722f] Running
	I1115 09:09:58.850497  360443 system_pods.go:89] "csi-hostpath-attacher-0" [6698b44f-d001-4c25-b60f-09940dcb56c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1115 09:09:58.850502  360443 system_pods.go:89] "csi-hostpath-resizer-0" [875fe603-0fa1-4bee-b391-4ae10fe0542a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1115 09:09:58.850508  360443 system_pods.go:89] "csi-hostpathplugin-zkcmq" [ce167230-ac85-431a-acf8-3a672b1aa5ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1115 09:09:58.850512  360443 system_pods.go:89] "etcd-addons-454747" [d0759de5-4799-4c33-82cb-2e3031947785] Running
	I1115 09:09:58.850516  360443 system_pods.go:89] "kindnet-wq26q" [11f8d927-49fd-4232-8c9f-96bccb76673a] Running
	I1115 09:09:58.850520  360443 system_pods.go:89] "kube-apiserver-addons-454747" [d7bf8535-2d7a-40fa-a045-1f51fe7e98f5] Running
	I1115 09:09:58.850525  360443 system_pods.go:89] "kube-controller-manager-addons-454747" [99633a87-dd53-4d17-a16c-319c7424f0db] Running
	I1115 09:09:58.850533  360443 system_pods.go:89] "kube-ingress-dns-minikube" [c7585e9f-c4af-4c2a-af6b-13c2612f3939] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1115 09:09:58.850538  360443 system_pods.go:89] "kube-proxy-jlh5q" [9e8210a5-1357-4e4a-902a-93a4801e0509] Running
	I1115 09:09:58.850551  360443 system_pods.go:89] "kube-scheduler-addons-454747" [b2b440de-ce6f-4202-aec3-7b2c9a9e5b60] Running
	I1115 09:09:58.850560  360443 system_pods.go:89] "metrics-server-85b7d694d7-m85dj" [0cd080a3-9d1c-497f-8366-db37bda2a923] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1115 09:09:58.850568  360443 system_pods.go:89] "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1115 09:09:58.850582  360443 system_pods.go:89] "registry-6b586f9694-mqjdw" [7ed7e9cf-6050-4f40-b957-f78707890861] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1115 09:09:58.850591  360443 system_pods.go:89] "registry-creds-764b6fb674-gckbr" [799c7fb7-4643-4a6c-ad1f-e02d10f99902] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1115 09:09:58.850599  360443 system_pods.go:89] "registry-proxy-pspnm" [4fe4b793-40d0-4349-955b-fce89850d82b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1115 09:09:58.850609  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nwkcn" [366e261d-64fb-4867-a32c-9e4a4b404a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.850617  360443 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t9lwf" [4eb66a49-c31b-4612-bb18-66f0769762fe] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1115 09:09:58.850623  360443 system_pods.go:89] "storage-provisioner" [1b40db86-a278-4988-8866-14d72b2d608a] Running
	I1115 09:09:58.850631  360443 system_pods.go:126] duration metric: took 1.006277333s to wait for k8s-apps to be running ...
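The pod listings above are the k8s-apps wait: kube-system pods are re-listed with a growing backoff (225ms, 313ms, 447ms here) until every required component is Running; coredns (label k8s-app=kube-dns) was the last one missing. A comparable poll, sketched with kubectl rather than minikube's own retry helper (illustrative only):

// wait_kube_dns.go - editor's illustrative sketch.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// coreDNSRunning reports whether every pod labelled k8s-app=kube-dns is in phase Running.
func coreDNSRunning() bool {
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
		"-l", "k8s-app=kube-dns", "-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false
	}
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	backoff := 225 * time.Millisecond
	for !coreDNSRunning() {
		fmt.Printf("will retry after %v: missing components: kube-dns\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow roughly like the intervals in the log above
	}
	fmt.Println("k8s-apps are running")
}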
	I1115 09:09:58.850640  360443 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:09:58.850688  360443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:09:58.864101  360443 system_svc.go:56] duration metric: took 13.450668ms WaitForService to wait for kubelet
	I1115 09:09:58.864128  360443 kubeadm.go:587] duration metric: took 42.573922418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
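The kubelet check above shells into the node and runs systemctl over SSH with sudo, as the Run line shows; run locally, the same probe reduces to an exit-code check (illustrative sketch only):

// kubelet_active.go - editor's illustrative sketch.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 only when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet service is running")
}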
	I1115 09:09:58.864144  360443 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:09:58.867050  360443 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:09:58.867077  360443 node_conditions.go:123] node cpu capacity is 8
	I1115 09:09:58.867091  360443 node_conditions.go:105] duration metric: took 2.942859ms to run NodePressure ...
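The NodePressure step reads the node's reported capacity (304681132Ki of ephemeral storage and 8 CPUs here) before the control plane is declared usable. The same fields can be read directly; a minimal sketch assuming kubectl and the addons-454747 context:

// node_capacity.go - editor's illustrative sketch.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The two capacity fields the log above verifies.
	for _, field := range []string{"ephemeral-storage", "cpu"} {
		out, err := exec.Command("kubectl", "get", "node", "addons-454747",
			"-o", "jsonpath={.status.capacity."+field+"}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Printf("node %s capacity: %s\n", field, out)
	}
}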
	I1115 09:09:58.867106  360443 start.go:242] waiting for startup goroutines ...
	I1115 09:09:59.006048  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:59.006651  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:59.109695  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:59.317691  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:09:59.508016  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:09:59.508272  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:09:59.612040  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:09:59.818027  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:00.008595  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:00.008912  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:00.110117  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:00.318170  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:00.506251  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:00.506747  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:00.609930  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:00.817681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:01.007208  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:01.007700  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:01.109987  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:01.317038  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:01.507825  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:01.507888  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:01.610558  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:01.818300  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:02.007613  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:02.007643  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:02.109634  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:02.318826  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:02.507231  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:02.507345  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:02.609762  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:02.818367  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:03.006813  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:03.007079  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:03.109935  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:03.317203  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:03.506090  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:03.506500  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:03.609754  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:03.817798  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:04.007159  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:04.007551  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:04.109945  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:04.317584  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:04.506862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:04.507546  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:04.609736  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:04.818815  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:05.006806  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:05.006936  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:05.110199  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:05.317257  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:05.506787  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:05.507130  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:05.609577  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:05.817654  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:06.006965  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:06.007600  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:06.109196  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:06.318575  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:06.506847  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:06.507472  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:06.609519  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:06.817317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:07.006744  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:07.007026  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:07.108690  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:07.317947  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:07.507054  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:07.507078  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:07.609406  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:07.817225  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:08.006562  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:08.006925  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:08.109788  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:08.317376  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:08.506652  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:08.507187  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:08.609464  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:08.818446  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:09.006290  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:09.007280  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:09.109512  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:09.318024  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:09.507197  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:09.507436  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:09.609562  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:09.820454  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:10.007190  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:10.007838  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:10.109895  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:10.318074  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:10.507111  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:10.507205  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:10.609246  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:10.817738  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:11.006673  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:11.007515  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:11.110416  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:11.318057  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:11.506659  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:11.506819  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:11.609894  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:11.817029  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:12.007317  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:12.007372  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:12.109190  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:12.317531  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:12.506682  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:12.507315  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:12.609087  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:12.817852  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:13.007213  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:13.007366  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:13.109260  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:13.317790  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:13.506541  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:13.507231  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:13.609245  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:13.817549  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:14.007093  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:14.007151  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:14.109303  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:14.318105  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:14.506783  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:14.507208  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:14.609438  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:14.843897  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:15.007726  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:15.007770  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:15.109766  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:15.318191  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:15.506688  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:15.507742  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:15.610283  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:15.817575  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:16.006577  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:16.006996  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:16.109684  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:16.318303  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:16.506565  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:16.507362  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:16.609232  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:16.817681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:17.007236  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:17.008614  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:17.110284  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:17.317799  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:17.506931  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:17.507566  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:17.609792  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:17.817668  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:18.006819  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:18.007451  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:18.109681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:18.318543  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:18.506311  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:18.507048  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:18.609845  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:18.817503  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:19.007322  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:19.007436  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:19.109221  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:19.317954  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:19.507048  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:19.507092  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:19.610326  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:19.818481  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:20.006721  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:20.007361  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:20.109587  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:20.317697  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:20.506625  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:20.507339  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:20.609064  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:20.817188  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:21.005822  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:21.006478  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:21.109295  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:21.317367  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:21.506484  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:21.507000  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:21.609662  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:21.818416  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:22.006862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:22.007369  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:22.109195  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:22.318253  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:22.506572  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:22.506838  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:22.609442  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:22.818420  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:23.007351  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:23.007549  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:23.109671  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:23.318354  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:23.506326  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:23.506831  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:23.609417  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:23.817835  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:24.007100  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:24.007126  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:24.109443  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:24.318267  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:24.506552  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:24.506837  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:24.610107  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:24.817894  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:25.007836  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:25.007972  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:25.109084  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:25.317769  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:25.506778  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:25.507271  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:25.609034  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:25.817176  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:26.006512  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:26.006888  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:26.109843  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:26.316895  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:26.507931  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:26.507985  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:26.610018  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:26.817475  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:27.006978  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:27.007143  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:27.108513  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:27.317921  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:27.507111  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:27.507111  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:27.608822  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:27.817013  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:28.006683  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:28.007861  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:28.109482  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:28.317601  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:28.507898  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:28.510584  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:28.610312  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:28.818052  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:29.007229  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:29.007540  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:29.110007  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:29.317105  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:29.505778  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1115 09:10:29.507824  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:29.609865  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:29.817193  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:30.006723  360443 kapi.go:107] duration metric: took 1m12.003785212s to wait for kubernetes.io/minikube-addons=registry ...
	I1115 09:10:30.007052  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:30.109972  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:30.318412  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:30.507932  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:30.610018  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:30.817631  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:31.008426  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:31.109794  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:31.318349  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:31.507744  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:31.609617  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:31.817737  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:32.007292  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:32.109234  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:32.317783  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:32.507645  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:32.609114  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:32.817049  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:33.008008  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:33.109721  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:33.318038  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:33.507907  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:33.609681  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:33.817935  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:34.007773  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:34.109758  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:34.318180  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:34.509241  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:34.608853  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:34.817134  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:35.008535  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:35.110080  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:35.317180  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:35.508319  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:35.609071  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1115 09:10:35.818110  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:36.009355  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:36.116049  360443 kapi.go:107] duration metric: took 1m11.510114662s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1115 09:10:36.117773  360443 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-454747 cluster.
	I1115 09:10:36.119035  360443 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1115 09:10:36.120325  360443 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
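A minimal sketch of the opt-out the gcp-auth messages above describe: a pod whose metadata carries the gcp-auth-skip-secret label, so the webhook leaves it without the mounted GCP credentials. This is an illustration using client-go, not minikube's own code; the label value "true", the pod name, image, namespace, and kubeconfig location are assumptions (the log only names the label key).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (path is an assumption; minikube writes one under $HOME/.kube).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// The log names only the label key; the "true" value is an assumed convention.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	// Create the pod; with the label present, gcp-auth should skip mounting credentials into it.
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}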
	I1115 09:10:36.318976  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:36.508624  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:36.818301  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:37.008235  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:37.318368  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:37.528141  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:37.816817  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:38.008134  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:38.317326  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:38.508020  360443 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1115 09:10:38.817663  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:39.007367  360443 kapi.go:107] duration metric: took 1m21.003225168s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1115 09:10:39.317905  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:39.817862  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:40.317743  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:40.821070  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:41.318158  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:41.818413  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:42.317829  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:42.818411  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:43.316902  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:43.817715  360443 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1115 09:10:44.317588  360443 kapi.go:107] duration metric: took 1m26.003759509s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1115 09:10:44.319488  360443 out.go:179] * Enabled addons: cloud-spanner, registry-creds, ingress-dns, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1115 09:10:44.320693  360443 addons.go:515] duration metric: took 1m28.030447552s for enable addons: enabled=[cloud-spanner registry-creds ingress-dns amd-gpu-device-plugin nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher inspektor-gadget default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1115 09:10:44.320733  360443 start.go:247] waiting for cluster config update ...
	I1115 09:10:44.320756  360443 start.go:256] writing updated cluster config ...
	I1115 09:10:44.321030  360443 ssh_runner.go:195] Run: rm -f paused
	I1115 09:10:44.325050  360443 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:10:44.328332  360443 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cjxcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.332380  360443 pod_ready.go:94] pod "coredns-66bc5c9577-cjxcs" is "Ready"
	I1115 09:10:44.332426  360443 pod_ready.go:86] duration metric: took 4.072001ms for pod "coredns-66bc5c9577-cjxcs" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.334333  360443 pod_ready.go:83] waiting for pod "etcd-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.337901  360443 pod_ready.go:94] pod "etcd-addons-454747" is "Ready"
	I1115 09:10:44.337921  360443 pod_ready.go:86] duration metric: took 3.555974ms for pod "etcd-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.339625  360443 pod_ready.go:83] waiting for pod "kube-apiserver-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.342939  360443 pod_ready.go:94] pod "kube-apiserver-addons-454747" is "Ready"
	I1115 09:10:44.342959  360443 pod_ready.go:86] duration metric: took 3.313237ms for pod "kube-apiserver-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.344758  360443 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.728966  360443 pod_ready.go:94] pod "kube-controller-manager-addons-454747" is "Ready"
	I1115 09:10:44.728994  360443 pod_ready.go:86] duration metric: took 384.215389ms for pod "kube-controller-manager-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:44.929612  360443 pod_ready.go:83] waiting for pod "kube-proxy-jlh5q" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.329146  360443 pod_ready.go:94] pod "kube-proxy-jlh5q" is "Ready"
	I1115 09:10:45.329175  360443 pod_ready.go:86] duration metric: took 399.53063ms for pod "kube-proxy-jlh5q" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.529709  360443 pod_ready.go:83] waiting for pod "kube-scheduler-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.928984  360443 pod_ready.go:94] pod "kube-scheduler-addons-454747" is "Ready"
	I1115 09:10:45.929017  360443 pod_ready.go:86] duration metric: took 399.279192ms for pod "kube-scheduler-addons-454747" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:10:45.929032  360443 pod_ready.go:40] duration metric: took 1.603950365s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:10:45.974999  360443 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:10:45.976892  360443 out.go:179] * Done! kubectl is now configured to use "addons-454747" cluster and "default" namespace by default
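For reference, the repeated kapi.go lines above are a poll over pods matching a minikube addon label (for example kubernetes.io/minikube-addons=csi-hostpath-driver) until they leave Pending, and the pod_ready lines do the same for the labelled kube-system control-plane pods. A rough client-go equivalent might look like the sketch below; the kubeconfig path, namespace, poll interval, and timeout are assumptions, and this is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls pods in kube-system matching selector until every
// match is Running (the log prints "Pending" while this has not happened yet).
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				running = false
			}
		}
		if running {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // poll interval is an assumption
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // timeout is an assumption
	defer cancel()
	if err := waitForLabeledPods(ctx, cs, "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		panic(err)
	}
	fmt.Println("csi-hostpath-driver pods are Running")
}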
	
	
	==> CRI-O <==
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.104070608Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=866baba9-047b-4861-a9f7-984cd2957596 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.121446736Z" level=info msg="Image docker.io/nginx:alpine not found" id=866baba9-047b-4861-a9f7-984cd2957596 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.121518201Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=866baba9-047b-4861-a9f7-984cd2957596 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.133002787Z" level=info msg="Pulled image: docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b" id=af4a83d9-47ae-4577-8399-3141814bb226 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.133529695Z" level=info msg="Checking image status: docker.io/nginx:latest" id=9c7dad73-2ed3-474e-9e93-551a051a915b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.135115038Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=15aeab9d-f8b7-4f38-bd9e-adfcfaa0c0d2 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.135572335Z" level=info msg="Checking image status: docker.io/nginx" id=7a150280-b8d6-46fa-b5e6-43d52ca3e620 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.13678627Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.140070256Z" level=info msg="Creating container: default/task-pv-pod/task-pv-container" id=0a457257-3efc-45b1-aeb8-ee4450f3ca95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.140180022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.146309509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.146825004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.172630952Z" level=info msg="Created container a3cb55da27e4038085f2e59f1be7fee053873f29b56f5eb00a35dd1e5fca3163: default/task-pv-pod/task-pv-container" id=0a457257-3efc-45b1-aeb8-ee4450f3ca95 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.17314308Z" level=info msg="Starting container: a3cb55da27e4038085f2e59f1be7fee053873f29b56f5eb00a35dd1e5fca3163" id=6674ef5b-0deb-4810-abe5-e49de9d169e3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:11:15 addons-454747 crio[775]: time="2025-11-15T09:11:15.174901308Z" level=info msg="Started container" PID=7376 containerID=a3cb55da27e4038085f2e59f1be7fee053873f29b56f5eb00a35dd1e5fca3163 description=default/task-pv-pod/task-pv-container id=6674ef5b-0deb-4810-abe5-e49de9d169e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e12e226f615e38d214894f44c2a31e8eeedaef48cf7c20d08056f26768b7b5e
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.178339679Z" level=info msg="Pulled image: docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7" id=15aeab9d-f8b7-4f38-bd9e-adfcfaa0c0d2 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.17946318Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=10c657ea-c403-4145-9130-0d6917ffa799 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.181456872Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=4b1a5aa6-40ac-44ff-989c-5369f1798238 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.185779922Z" level=info msg="Creating container: default/nginx/nginx" id=f1dde3ec-7488-40c2-a5d1-abe7314d8f53 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.185919429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.196677947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.197165438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.234266529Z" level=info msg="Created container 407d93404914b8e9a7049cddec94b2d862b77527a1857cdb9107e9cad9030c96: default/nginx/nginx" id=f1dde3ec-7488-40c2-a5d1-abe7314d8f53 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.234838845Z" level=info msg="Starting container: 407d93404914b8e9a7049cddec94b2d862b77527a1857cdb9107e9cad9030c96" id=7bd06386-a34f-4ad4-b110-fd9540260914 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:11:17 addons-454747 crio[775]: time="2025-11-15T09:11:17.236673043Z" level=info msg="Started container" PID=7564 containerID=407d93404914b8e9a7049cddec94b2d862b77527a1857cdb9107e9cad9030c96 description=default/nginx/nginx id=7bd06386-a34f-4ad4-b110-fd9540260914 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed978a99b5b34b0d68668da92ace1801a456d1c59695be3f8a31d4b245cdf5fe
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	407d93404914b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 seconds ago        Running             nginx                                    0                   ed978a99b5b34       nginx                                      default
	a3cb55da27e40       docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b                                              4 seconds ago        Running             task-pv-container                        0                   1e12e226f615e       task-pv-pod                                default
	152bfae953f10       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          30 seconds ago       Running             busybox                                  0                   96196cf71c8c2       busybox                                    default
	a113ced30ad2c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          35 seconds ago       Running             csi-snapshotter                          0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	15b0038a933d3       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          36 seconds ago       Running             csi-provisioner                          0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	32d50218303b0       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            38 seconds ago       Running             liveness-probe                           0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	9585fc97c2461       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           39 seconds ago       Running             hostpath                                 0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	7e1f8d44b44d4       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             40 seconds ago       Running             controller                               0                   7560370900d1a       ingress-nginx-controller-6c8bf45fb-vhvjt   ingress-nginx
	ca82589ebc097       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 43 seconds ago       Running             gcp-auth                                 0                   0c1a9f81077e2       gcp-auth-78565c9fb4-gtlhb                  gcp-auth
	29cde6adf092c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                46 seconds ago       Running             node-driver-registrar                    0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	7a9c917944476       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            47 seconds ago       Running             gadget                                   0                   02e394c36fb8f       gadget-5lh8b                               gadget
	c7e613941608e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              50 seconds ago       Running             registry-proxy                           0                   220245d49113e       registry-proxy-pspnm                       kube-system
	1fb29add2d5a8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     52 seconds ago       Running             amd-gpu-device-plugin                    0                   ee65cc22eb39e       amd-gpu-device-plugin-z8k7m                kube-system
	f093743456ae5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   54 seconds ago       Running             csi-external-health-monitor-controller   0                   387984f9264d2       csi-hostpathplugin-zkcmq                   kube-system
	9a64b60b839d5       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     54 seconds ago       Running             nvidia-device-plugin-ctr                 0                   3221f2c6eeda5       nvidia-device-plugin-daemonset-58w8g       kube-system
	d318e1e5a03be       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      59 seconds ago       Running             volume-snapshot-controller               0                   ec29b3db79937       snapshot-controller-7d9fbc56b8-t9lwf       kube-system
	39e0b3ce59231       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              59 seconds ago       Running             csi-resizer                              0                   b40a0896bbc48       csi-hostpath-resizer-0                     kube-system
	c5cc58d4c65e1       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   b7c2cf2351e75       yakd-dashboard-5ff678cb9-lzndj             yakd-dashboard
	dd10873e5c8f4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   eab6795f7d68d       csi-hostpath-attacher-0                    kube-system
	92dbc66a225a6       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   d722173cf2abe       snapshot-controller-7d9fbc56b8-nwkcn       kube-system
	3214cef25f9cd       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             About a minute ago   Exited              patch                                    1                   ed6df1585bf5d       ingress-nginx-admission-patch-kpcl9        ingress-nginx
	e8c5cef164c32       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   About a minute ago   Exited              create                                   0                   fc7f842cda60f       ingress-nginx-admission-create-2bvdg       ingress-nginx
	88c5555ce7baf       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   a389a29ca91f4       local-path-provisioner-648f6765c9-wsqdl    local-path-storage
	f62ecc77b12f0       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               About a minute ago   Running             cloud-spanner-emulator                   0                   cb4dbe29313b3       cloud-spanner-emulator-6f9fcf858b-nnvcj    default
	61c26678bcffa       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   1c90d62b414ce       registry-6b586f9694-mqjdw                  kube-system
	c485f7a9c3e2b       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   1205721f2368e       metrics-server-85b7d694d7-m85dj            kube-system
	7370b2befcb1e       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   e6cf854e50f39       kube-ingress-dns-minikube                  kube-system
	79d436a219f2f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   8300ce2cf4229       coredns-66bc5c9577-cjxcs                   kube-system
	73844762f5663       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   b42d483742979       storage-provisioner                        kube-system
	bb9cab6c50c64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   9ce03c7023a0e       kube-proxy-jlh5q                           kube-system
	ab522c42d68a8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   d9711bc751312       kindnet-wq26q                              kube-system
	6dd9f12c0f48a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   a604006d5d7ac       etcd-addons-454747                         kube-system
	a73de86856e0e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   86be02d7f77e6       kube-controller-manager-addons-454747      kube-system
	475fb5d70b555       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   06152db044df1       kube-apiserver-addons-454747               kube-system
	b4dce63e838db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   b3ecfc13a7179       kube-scheduler-addons-454747               kube-system
	
	
	==> coredns [79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c] <==
	[INFO] 10.244.0.13:43896 - 10245 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000161527s
	[INFO] 10.244.0.13:41014 - 12502 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000064881s
	[INFO] 10.244.0.13:41014 - 12217 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000091532s
	[INFO] 10.244.0.13:47865 - 1025 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00006685s
	[INFO] 10.244.0.13:47865 - 751 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000116121s
	[INFO] 10.244.0.13:36545 - 5572 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011537s
	[INFO] 10.244.0.13:36545 - 5356 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152312s
	[INFO] 10.244.0.22:51628 - 58488 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000208738s
	[INFO] 10.244.0.22:43571 - 15949 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164573s
	[INFO] 10.244.0.22:34504 - 26080 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152145s
	[INFO] 10.244.0.22:60041 - 32631 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119485s
	[INFO] 10.244.0.22:60016 - 24949 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114295s
	[INFO] 10.244.0.22:46220 - 32832 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172907s
	[INFO] 10.244.0.22:58100 - 7112 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003422976s
	[INFO] 10.244.0.22:60829 - 42970 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003492397s
	[INFO] 10.244.0.22:57149 - 21155 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004975862s
	[INFO] 10.244.0.22:58428 - 3630 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005117315s
	[INFO] 10.244.0.22:47578 - 48598 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004691508s
	[INFO] 10.244.0.22:56891 - 62201 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004787414s
	[INFO] 10.244.0.22:57346 - 23454 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005262059s
	[INFO] 10.244.0.22:50455 - 3251 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005379554s
	[INFO] 10.244.0.22:37324 - 56591 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000861225s
	[INFO] 10.244.0.22:41530 - 45664 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00208366s
	[INFO] 10.244.0.27:56655 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002703s
	[INFO] 10.244.0.27:40442 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000193982s
	
	
	==> describe nodes <==
	Name:               addons-454747
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-454747
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=addons-454747
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_09_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-454747
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-454747"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:09:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-454747
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:11:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:11:13 +0000   Sat, 15 Nov 2025 09:09:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:11:13 +0000   Sat, 15 Nov 2025 09:09:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:11:13 +0000   Sat, 15 Nov 2025 09:09:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:11:13 +0000   Sat, 15 Nov 2025 09:09:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-454747
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                770a3a40-fc20-448c-8377-e5435651e3a8
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-6f9fcf858b-nnvcj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  gadget                      gadget-5lh8b                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gcp-auth                    gcp-auth-78565c9fb4-gtlhb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-vhvjt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m2s
	  kube-system                 amd-gpu-device-plugin-z8k7m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 coredns-66bc5c9577-cjxcs                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpathplugin-zkcmq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 etcd-addons-454747                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m9s
	  kube-system                 kindnet-wq26q                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-addons-454747                250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-454747       200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-jlh5q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-addons-454747                100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 metrics-server-85b7d694d7-m85dj             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m2s
	  kube-system                 nvidia-device-plugin-daemonset-58w8g        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-6b586f9694-mqjdw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-764b6fb674-gckbr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 registry-proxy-pspnm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 snapshot-controller-7d9fbc56b8-nwkcn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-t9lwf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  local-path-storage          local-path-provisioner-648f6765c9-wsqdl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lzndj              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node addons-454747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node addons-454747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x8 over 2m14s)  kubelet          Node addons-454747 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s                   kubelet          Node addons-454747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s                   kubelet          Node addons-454747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s                   kubelet          Node addons-454747 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m4s                   node-controller  Node addons-454747 event: Registered Node addons-454747 in Controller
	  Normal  NodeReady                82s                    kubelet          Node addons-454747 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.895287] kauditd_printk_skb: 47 callbacks suppressed
	[Nov15 09:05] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 8a 50 12 9c 18 08 06
	[ +16.382722] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 fc 8e f8 3e 4d 08 06
	[  +0.000404] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e 8a 50 12 9c 18 08 06
	[ +11.456091] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ca 4b b6 f1 73 ad 08 06
	[ +11.372428] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[  +5.372949] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 1f 8a 15 17 62 08 06
	[Nov15 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 33 62 c7 21 d2 08 06
	[  +0.000336] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 1f 8a 15 17 62 08 06
	[ +16.664453] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 95 ea ef 71 6a 08 06
	[  +0.000392] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 4b b6 f1 73 ad 08 06
	[  +8.261506] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	
	
	==> etcd [6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072] <==
	{"level":"warn","ts":"2025-11-15T09:09:07.647427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.654019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.660275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.667237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.684597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.691823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.704108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.710260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.716848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.722871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.728614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.734715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.741050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.771138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.777348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.783549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:07.831228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:18.673012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:18.680131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.232967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.239805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.250672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:09:45.256836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37994","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:10:40.818352Z","caller":"traceutil/trace.go:172","msg":"trace[829510516] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"106.965678ms","start":"2025-11-15T09:10:40.711369Z","end":"2025-11-15T09:10:40.818334Z","steps":["trace[829510516] 'process raft request'  (duration: 106.850155ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T09:10:52.609184Z","caller":"traceutil/trace.go:172","msg":"trace[127138712] transaction","detail":"{read_only:false; response_revision:1296; number_of_response:1; }","duration":"129.342764ms","start":"2025-11-15T09:10:52.479825Z","end":"2025-11-15T09:10:52.609168Z","steps":["trace[127138712] 'process raft request'  (duration: 129.212638ms)"],"step_count":1}
	
	
	==> gcp-auth [ca82589ebc0974a7dfdb0ba2b8e31093ad90584fc9cd7c1cdf70a130408f4837] <==
	2025/11/15 09:10:35 GCP Auth Webhook started!
	2025/11/15 09:10:46 Ready to marshal response ...
	2025/11/15 09:10:46 Ready to write response ...
	2025/11/15 09:10:46 Ready to marshal response ...
	2025/11/15 09:10:46 Ready to write response ...
	2025/11/15 09:10:46 Ready to marshal response ...
	2025/11/15 09:10:46 Ready to write response ...
	2025/11/15 09:10:57 Ready to marshal response ...
	2025/11/15 09:10:57 Ready to write response ...
	2025/11/15 09:10:57 Ready to marshal response ...
	2025/11/15 09:10:57 Ready to write response ...
	2025/11/15 09:11:06 Ready to marshal response ...
	2025/11/15 09:11:06 Ready to write response ...
	2025/11/15 09:11:08 Ready to marshal response ...
	2025/11/15 09:11:08 Ready to write response ...
	2025/11/15 09:11:11 Ready to marshal response ...
	2025/11/15 09:11:11 Ready to write response ...
	2025/11/15 09:11:14 Ready to marshal response ...
	2025/11/15 09:11:14 Ready to write response ...
	
	
	==> kernel <==
	 09:11:19 up 53 min,  0 user,  load average: 0.88, 2.39, 2.44
	Linux addons-454747 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641] <==
	E1115 09:09:47.358337       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 09:09:47.358506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 09:09:47.358503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 09:09:47.358577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 09:09:48.857731       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:09:48.857761       1 metrics.go:72] Registering metrics
	I1115 09:09:48.857841       1 controller.go:711] "Syncing nftables rules"
	I1115 09:09:57.360243       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:09:57.360319       1 main.go:301] handling current node
	I1115 09:10:07.357215       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:10:07.357274       1 main.go:301] handling current node
	I1115 09:10:17.356608       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:10:17.356646       1 main.go:301] handling current node
	I1115 09:10:27.356743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:10:27.356798       1 main.go:301] handling current node
	I1115 09:10:37.357168       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:10:37.357199       1 main.go:301] handling current node
	I1115 09:10:47.356975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:10:47.357007       1 main.go:301] handling current node
	I1115 09:10:57.356879       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:10:57.357028       1 main.go:301] handling current node
	I1115 09:11:07.356506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:11:07.356537       1 main.go:301] handling current node
	I1115 09:11:17.356534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:11:17.356572       1 main.go:301] handling current node
	
	
	==> kube-apiserver [475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8] <==
	E1115 09:10:06.834292       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:10:06.834573       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.210.113:443: connect: connection refused" logger="UnhandledError"
	W1115 09:10:07.836564       1 handler_proxy.go:99] no RequestInfo found in the context
	W1115 09:10:07.836592       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:10:07.836619       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1115 09:10:07.836634       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1115 09:10:07.836660       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1115 09:10:07.837674       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1115 09:10:08.291832       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1115 09:10:11.845120       1 handler_proxy.go:99] no RequestInfo found in the context
	E1115 09:10:11.845178       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1115 09:10:11.845180       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.210.113:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1115 09:10:56.645838       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46000: use of closed network connection
	E1115 09:10:56.798950       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46024: use of closed network connection
	I1115 09:11:14.154456       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1115 09:11:14.388580       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.74.1"}
	
	
	==> kube-controller-manager [a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b] <==
	I1115 09:09:15.215335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:09:15.215377       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:09:15.215402       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 09:09:15.215422       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:09:15.215473       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 09:09:15.215511       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 09:09:15.215555       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:09:15.215629       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:09:15.215718       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 09:09:15.215740       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 09:09:15.215921       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:09:15.217947       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:09:15.222034       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:09:15.226192       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 09:09:15.231584       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 09:09:15.233873       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:09:15.239082       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E1115 09:09:45.226940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1115 09:09:45.227115       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1115 09:09:45.227174       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1115 09:09:45.241121       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1115 09:09:45.244968       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1115 09:09:45.328033       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:09:45.345387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:10:00.174595       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a] <==
	I1115 09:09:17.038868       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:09:17.339709       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:09:17.441569       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:09:17.446500       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:09:17.446669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:09:17.629759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:09:17.629921       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:09:17.638576       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:09:17.639017       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:09:17.639898       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:09:17.642286       1 config.go:200] "Starting service config controller"
	I1115 09:09:17.642336       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:09:17.642441       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:09:17.642457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:09:17.642856       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:09:17.642896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:09:17.643208       1 config.go:309] "Starting node config controller"
	I1115 09:09:17.643235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:09:17.643245       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:09:17.742482       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:09:17.743691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:09:17.743777       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f] <==
	E1115 09:09:08.238032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:09:08.238109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:09:08.238126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:09:08.238234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:09:08.238301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:09:08.238474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:09:08.238500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:09:08.238628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:09:08.238657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:09:08.238704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:09:08.238754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:09:08.238782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:09:08.238804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:09:08.238799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:09:08.239196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:09:09.145250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:09:09.265103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:09:09.341576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:09:09.350583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:09:09.379693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:09:09.390864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:09:09.392711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:09:09.405713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:09:09.423800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1115 09:09:09.834594       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:11:10 addons-454747 kubelet[1303]: I1115 09:11:10.125438    1303 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3585dcae01c2987eaf2303ebe7bc57245f2038842a517edd3b0ec10a0b0525"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: E1115 09:11:10.126981    1303 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-cb0fe8e1-5280-47d2-a0f7-3e04a804af72\" is forbidden: User \"system:node:addons-454747\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-454747' and this object" podUID="65646e76-1b4b-4df6-9121-da7dbb6fa0d4" pod="local-path-storage/helper-pod-delete-pvc-cb0fe8e1-5280-47d2-a0f7-3e04a804af72"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: I1115 09:11:10.637919    1303 scope.go:117] "RemoveContainer" containerID="95bd511d5b32b58ac6b4eb5c782dad4b20fd98cabe46478440ddd88c7eccf638"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: E1115 09:11:10.640241    1303 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-cb0fe8e1-5280-47d2-a0f7-3e04a804af72\" is forbidden: User \"system:node:addons-454747\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-454747' and this object" podUID="65646e76-1b4b-4df6-9121-da7dbb6fa0d4" pod="local-path-storage/helper-pod-delete-pvc-cb0fe8e1-5280-47d2-a0f7-3e04a804af72"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: I1115 09:11:10.640661    1303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65646e76-1b4b-4df6-9121-da7dbb6fa0d4" path="/var/lib/kubelet/pods/65646e76-1b4b-4df6-9121-da7dbb6fa0d4/volumes"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: I1115 09:11:10.647070    1303 scope.go:117] "RemoveContainer" containerID="b61bc34e230e0a65f7c622bc6884961d4c0e436d9329ce1e5a403b66cffb521e"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: I1115 09:11:10.655054    1303 scope.go:117] "RemoveContainer" containerID="6af94d058ea999a168c74dc547bde5ec5f6ad7c30d0d744ad6ce071a7b141dad"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: I1115 09:11:10.665611    1303 scope.go:117] "RemoveContainer" containerID="5d5f23b51a4b0111b6525a58c121b47619af745f1d8e96a4eecae62b01d39610"
	Nov 15 09:11:10 addons-454747 kubelet[1303]: I1115 09:11:10.673497    1303 scope.go:117] "RemoveContainer" containerID="c932fdb0591935e94724a14c31ea2f1cc06d952713e41952c4b7018d11625b7d"
	Nov 15 09:11:11 addons-454747 kubelet[1303]: I1115 09:11:11.520205    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9dfb5a76-426a-4a88-ad65-9e3f47866261\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^06c2e32a-c203-11f0-afc4-ee51f342b8d8\") pod \"task-pv-pod\" (UID: \"ae0c0552-d79c-4749-8d8c-863c2a7f57a4\") " pod="default/task-pv-pod"
	Nov 15 09:11:11 addons-454747 kubelet[1303]: I1115 09:11:11.520255    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ae0c0552-d79c-4749-8d8c-863c2a7f57a4-gcp-creds\") pod \"task-pv-pod\" (UID: \"ae0c0552-d79c-4749-8d8c-863c2a7f57a4\") " pod="default/task-pv-pod"
	Nov 15 09:11:11 addons-454747 kubelet[1303]: I1115 09:11:11.520326    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcc77\" (UniqueName: \"kubernetes.io/projected/ae0c0552-d79c-4749-8d8c-863c2a7f57a4-kube-api-access-zcc77\") pod \"task-pv-pod\" (UID: \"ae0c0552-d79c-4749-8d8c-863c2a7f57a4\") " pod="default/task-pv-pod"
	Nov 15 09:11:11 addons-454747 kubelet[1303]: I1115 09:11:11.627427    1303 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-9dfb5a76-426a-4a88-ad65-9e3f47866261\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^06c2e32a-c203-11f0-afc4-ee51f342b8d8\") pod \"task-pv-pod\" (UID: \"ae0c0552-d79c-4749-8d8c-863c2a7f57a4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/3f962eabf35582e5a208879fc43a88a9291654df6304e156bb54e3ad04ffc884/globalmount\"" pod="default/task-pv-pod"
	Nov 15 09:11:12 addons-454747 kubelet[1303]: I1115 09:11:12.326982    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/195a0d5a-38f2-4006-8a54-0e94daa6974f-gcp-creds\") pod \"195a0d5a-38f2-4006-8a54-0e94daa6974f\" (UID: \"195a0d5a-38f2-4006-8a54-0e94daa6974f\") "
	Nov 15 09:11:12 addons-454747 kubelet[1303]: I1115 09:11:12.327085    1303 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbhmm\" (UniqueName: \"kubernetes.io/projected/195a0d5a-38f2-4006-8a54-0e94daa6974f-kube-api-access-mbhmm\") pod \"195a0d5a-38f2-4006-8a54-0e94daa6974f\" (UID: \"195a0d5a-38f2-4006-8a54-0e94daa6974f\") "
	Nov 15 09:11:12 addons-454747 kubelet[1303]: I1115 09:11:12.327127    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195a0d5a-38f2-4006-8a54-0e94daa6974f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "195a0d5a-38f2-4006-8a54-0e94daa6974f" (UID: "195a0d5a-38f2-4006-8a54-0e94daa6974f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 15 09:11:12 addons-454747 kubelet[1303]: I1115 09:11:12.327292    1303 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/195a0d5a-38f2-4006-8a54-0e94daa6974f-gcp-creds\") on node \"addons-454747\" DevicePath \"\""
	Nov 15 09:11:12 addons-454747 kubelet[1303]: I1115 09:11:12.329819    1303 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195a0d5a-38f2-4006-8a54-0e94daa6974f-kube-api-access-mbhmm" (OuterVolumeSpecName: "kube-api-access-mbhmm") pod "195a0d5a-38f2-4006-8a54-0e94daa6974f" (UID: "195a0d5a-38f2-4006-8a54-0e94daa6974f"). InnerVolumeSpecName "kube-api-access-mbhmm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 15 09:11:12 addons-454747 kubelet[1303]: I1115 09:11:12.428521    1303 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mbhmm\" (UniqueName: \"kubernetes.io/projected/195a0d5a-38f2-4006-8a54-0e94daa6974f-kube-api-access-mbhmm\") on node \"addons-454747\" DevicePath \"\""
	Nov 15 09:11:12 addons-454747 kubelet[1303]: I1115 09:11:12.642149    1303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195a0d5a-38f2-4006-8a54-0e94daa6974f" path="/var/lib/kubelet/pods/195a0d5a-38f2-4006-8a54-0e94daa6974f/volumes"
	Nov 15 09:11:13 addons-454747 kubelet[1303]: I1115 09:11:13.142710    1303 scope.go:117] "RemoveContainer" containerID="42ae811f9c5e40088dbe529ad78c0292a3f12c215dfb7f469491b4159edd2715"
	Nov 15 09:11:14 addons-454747 kubelet[1303]: I1115 09:11:14.446715    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/05b11fbe-56e5-4a05-b781-867491771b80-gcp-creds\") pod \"nginx\" (UID: \"05b11fbe-56e5-4a05-b781-867491771b80\") " pod="default/nginx"
	Nov 15 09:11:14 addons-454747 kubelet[1303]: I1115 09:11:14.446848    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9rr\" (UniqueName: \"kubernetes.io/projected/05b11fbe-56e5-4a05-b781-867491771b80-kube-api-access-kx9rr\") pod \"nginx\" (UID: \"05b11fbe-56e5-4a05-b781-867491771b80\") " pod="default/nginx"
	Nov 15 09:11:16 addons-454747 kubelet[1303]: I1115 09:11:16.168382    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=1.734406401 podStartE2EDuration="5.168358303s" podCreationTimestamp="2025-11-15 09:11:11 +0000 UTC" firstStartedPulling="2025-11-15 09:11:11.700953426 +0000 UTC m=+121.141120167" lastFinishedPulling="2025-11-15 09:11:15.134905322 +0000 UTC m=+124.575072069" observedRunningTime="2025-11-15 09:11:16.167693738 +0000 UTC m=+125.607860500" watchObservedRunningTime="2025-11-15 09:11:16.168358303 +0000 UTC m=+125.608525063"
	Nov 15 09:11:18 addons-454747 kubelet[1303]: I1115 09:11:18.179328    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=2.120280882 podStartE2EDuration="4.179305202s" podCreationTimestamp="2025-11-15 09:11:14 +0000 UTC" firstStartedPulling="2025-11-15 09:11:15.121871756 +0000 UTC m=+124.562038494" lastFinishedPulling="2025-11-15 09:11:17.180896076 +0000 UTC m=+126.621062814" observedRunningTime="2025-11-15 09:11:18.178176751 +0000 UTC m=+127.618343510" watchObservedRunningTime="2025-11-15 09:11:18.179305202 +0000 UTC m=+127.619471963"
	
	
	==> storage-provisioner [73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9] <==
	W1115 09:10:54.617757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:10:56.621835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:10:56.630767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:10:58.634416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:10:58.639859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:00.643048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:00.647232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:02.650062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:02.654488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:04.658090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:04.662379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:06.665923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:06.671877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:08.675309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:08.679257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:10.683154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:10.687717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:12.690583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:12.694759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:14.699565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:14.705899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:16.709584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:16.714317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:18.717871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:11:18.721829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
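
The storage-provisioner warnings above recur at roughly two-second intervals because the provisioner still reads the core/v1 Endpoints API (most likely its leader-election lock, though that is an assumption; the log only shows the client-side deprecation warning), which Kubernetes deprecates in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. A quick way to compare the two resources on this cluster, assuming the addons-454747 context from the log is still available:

	kubectl --context addons-454747 get endpointslices.discovery.k8s.io -A
	kubectl --context addons-454747 get endpoints -A   # still served, but deprecated in v1.33+
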
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-454747 -n addons-454747
helpers_test.go:269: (dbg) Run:  kubectl --context addons-454747 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9 registry-creds-764b6fb674-gckbr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-454747 describe pod ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9 registry-creds-764b6fb674-gckbr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-454747 describe pod ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9 registry-creds-764b6fb674-gckbr: exit status 1 (61.230536ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2bvdg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kpcl9" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-gckbr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-454747 describe pod ingress-nginx-admission-create-2bvdg ingress-nginx-admission-patch-kpcl9 registry-creds-764b6fb674-gckbr: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.84894ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:11:20.300275  371985 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:20.300566  371985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:20.300576  371985 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:20.300579  371985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:20.300759  371985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:20.301017  371985 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:20.301339  371985 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:20.301352  371985 addons.go:607] checking whether the cluster is paused
	I1115 09:11:20.301442  371985 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:20.301454  371985 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:20.301830  371985 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:20.320188  371985 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:20.320244  371985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:20.338668  371985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:20.432139  371985 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:20.432215  371985 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:20.461464  371985 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:20.461486  371985 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:20.461490  371985 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:20.461495  371985 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:20.461499  371985 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:20.461503  371985 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:20.461507  371985 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:20.461511  371985 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:20.461514  371985 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:20.461543  371985 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:20.461553  371985 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:20.461557  371985 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:20.461561  371985 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:20.461566  371985 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:20.461571  371985 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:20.461593  371985 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:20.461606  371985 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:20.461613  371985 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:20.461616  371985 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:20.461619  371985 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:20.461622  371985 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:20.461624  371985 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:20.461626  371985 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:20.461629  371985 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:20.461631  371985 cri.go:89] found id: ""
	I1115 09:11:20.461672  371985 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:20.476506  371985 out.go:203] 
	W1115 09:11:20.477889  371985 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:20.477912  371985 out.go:285] * 
	* 
	W1115 09:11:20.481829  371985 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:20.483055  371985 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.58s)
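
The addon-disable failures captured in this report all trip over the same check: before disabling an addon, minikube verifies the cluster is not paused by listing kube-system containers with crictl and then running "sudo runc list -f json" inside the node, and that runc call exits 1 because /run/runc does not exist on this crio node, which minikube surfaces as MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal sketch of reproducing the check by hand, assuming the out/minikube-linux-amd64 binary and the addons-454747 profile from this run are still present:

	# lists kube-system containers, same command the failing check runs over SSH
	out/minikube-linux-amd64 -p addons-454747 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the paused-state probe that fails; on this node /run/runc is absent
	out/minikube-linux-amd64 -p addons-454747 ssh -- sudo runc list -f json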

                                                
                                    
TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-nnvcj" [a3cdd792-f884-4d4c-bc37-9ef1fc505eda] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003256371s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.831975ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:11:18.605414  371311 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:18.605705  371311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:18.605718  371311 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:18.605725  371311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:18.606055  371311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:18.606660  371311 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:18.607102  371311 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:18.607124  371311 addons.go:607] checking whether the cluster is paused
	I1115 09:11:18.607258  371311 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:18.607278  371311 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:18.608009  371311 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:18.630348  371311 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:18.630436  371311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:18.649876  371311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:18.745684  371311 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:18.745768  371311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:18.776481  371311 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:18.776508  371311 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:18.776515  371311 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:18.776520  371311 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:18.776550  371311 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:18.776555  371311 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:18.776560  371311 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:18.776568  371311 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:18.776573  371311 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:18.776582  371311 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:18.776591  371311 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:18.776595  371311 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:18.776602  371311 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:18.776606  371311 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:18.776614  371311 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:18.776627  371311 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:18.776635  371311 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:18.776641  371311 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:18.776645  371311 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:18.776649  371311 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:18.776657  371311 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:18.776662  371311 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:18.776669  371311 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:18.776673  371311 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:18.776677  371311 cri.go:89] found id: ""
	I1115 09:11:18.776731  371311 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:18.791011  371311 out.go:203] 
	W1115 09:11:18.792327  371311 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:18.792345  371311 out.go:285] * 
	* 
	W1115 09:11:18.796262  371311 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:18.797471  371311 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                    
TestAddons/parallel/LocalPath (10.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-454747 apply -f testdata/storage-provisioner-rancher/pvc.yaml
I1115 09:10:57.054052  359063 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:955: (dbg) Run:  kubectl --context addons-454747 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-454747 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [08bac910-37c7-4b96-8550-9ede2776bf44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [08bac910-37c7-4b96-8550-9ede2776bf44] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [08bac910-37c7-4b96-8550-9ede2776bf44] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002925496s
addons_test.go:967: (dbg) Run:  kubectl --context addons-454747 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 ssh "cat /opt/local-path-provisioner/pvc-cb0fe8e1-5280-47d2-a0f7-3e04a804af72_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-454747 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-454747 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (254.474857ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:11:07.011125  369360 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:07.011372  369360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:07.011381  369360 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:07.011385  369360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:07.011628  369360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:07.011917  369360 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:07.012239  369360 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:07.012257  369360 addons.go:607] checking whether the cluster is paused
	I1115 09:11:07.012340  369360 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:07.012360  369360 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:07.012758  369360 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:07.031351  369360 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:07.031429  369360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:07.048865  369360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:07.142491  369360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:07.142624  369360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:07.174909  369360 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:07.174939  369360 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:07.174945  369360 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:07.174948  369360 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:07.174951  369360 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:07.174954  369360 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:07.174957  369360 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:07.174959  369360 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:07.174962  369360 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:07.174967  369360 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:07.174969  369360 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:07.174972  369360 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:07.174974  369360 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:07.174977  369360 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:07.174979  369360 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:07.174983  369360 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:07.174986  369360 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:07.174989  369360 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:07.174992  369360 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:07.174994  369360 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:07.174997  369360 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:07.175000  369360 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:07.175003  369360 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:07.175005  369360 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:07.175007  369360 cri.go:89] found id: ""
	I1115 09:11:07.175045  369360 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:07.191825  369360 out.go:203] 
	W1115 09:11:07.195538  369360 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:07.195567  369360 out.go:285] * 
	* 
	W1115 09:11:07.200027  369360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:07.202078  369360 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.16s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.33s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-58w8g" [074fe19e-299a-47d4-b11d-39059b797509] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004466511s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (319.530538ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:11:13.295298  369938 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:13.295680  369938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:13.295694  369938 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:13.295701  369938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:13.296019  369938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:13.296381  369938 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:13.296948  369938 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:13.296978  369938 addons.go:607] checking whether the cluster is paused
	I1115 09:11:13.297132  369938 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:13.297152  369938 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:13.297794  369938 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:13.323458  369938 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:13.323557  369938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:13.349297  369938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:13.457444  369938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:13.457546  369938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:13.497095  369938 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:13.497121  369938 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:13.497127  369938 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:13.497133  369938 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:13.497138  369938 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:13.497143  369938 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:13.497147  369938 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:13.497151  369938 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:13.497155  369938 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:13.497172  369938 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:13.497223  369938 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:13.497229  369938 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:13.497234  369938 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:13.497238  369938 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:13.497241  369938 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:13.497255  369938 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:13.497259  369938 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:13.497266  369938 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:13.497271  369938 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:13.497274  369938 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:13.497278  369938 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:13.497282  369938 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:13.497285  369938 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:13.497289  369938 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:13.497293  369938 cri.go:89] found id: ""
	I1115 09:11:13.497439  369938 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:13.517238  369938 out.go:203] 
	W1115 09:11:13.518651  369938 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:13.518675  369938 out.go:285] * 
	* 
	W1115 09:11:13.525156  369938 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:13.527376  369938 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.33s)

                                                
                                    
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lzndj" [7fc8ed5a-fc54-48fa-98fa-dd1edb118593] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003540015s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable yakd --alsologtostderr -v=1: exit status 11 (247.820613ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:11:08.365614  369520 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:08.365761  369520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:08.365771  369520 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:08.365776  369520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:08.366009  369520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:08.366295  369520 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:08.366704  369520 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:08.366725  369520 addons.go:607] checking whether the cluster is paused
	I1115 09:11:08.366836  369520 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:08.366853  369520 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:08.367223  369520 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:08.385053  369520 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:08.385109  369520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:08.403514  369520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:08.495890  369520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:08.495966  369520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:08.531205  369520 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:08.531225  369520 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:08.531228  369520 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:08.531231  369520 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:08.531234  369520 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:08.531236  369520 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:08.531239  369520 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:08.531241  369520 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:08.531244  369520 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:08.531248  369520 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:08.531251  369520 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:08.531253  369520 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:08.531255  369520 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:08.531258  369520 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:08.531261  369520 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:08.531265  369520 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:08.531268  369520 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:08.531272  369520 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:08.531275  369520 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:08.531277  369520 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:08.531282  369520 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:08.531291  369520 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:08.531296  369520 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:08.531299  369520 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:08.531330  369520 cri.go:89] found id: ""
	I1115 09:11:08.531374  369520 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:08.546488  369520 out.go:203] 
	W1115 09:11:08.547566  369520 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:08.547582  369520 out.go:285] * 
	* 
	W1115 09:11:08.551599  369520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:08.552837  369520 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-z8k7m" [8cc4171b-54ae-4353-9ac3-b8f4de94b486] Running
I1115 09:10:57.058228  359063 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 09:10:57.058248  359063 kapi.go:107] duration metric: took 4.22013ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003719965s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-454747 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-454747 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (248.408089ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:11:03.114900  369057 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:11:03.115035  369057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:03.115050  369057 out.go:374] Setting ErrFile to fd 2...
	I1115 09:11:03.115055  369057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:11:03.115258  369057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:11:03.115568  369057 mustload.go:66] Loading cluster: addons-454747
	I1115 09:11:03.115930  369057 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:03.115948  369057 addons.go:607] checking whether the cluster is paused
	I1115 09:11:03.116026  369057 config.go:182] Loaded profile config "addons-454747": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:11:03.116038  369057 host.go:66] Checking if "addons-454747" exists ...
	I1115 09:11:03.116446  369057 cli_runner.go:164] Run: docker container inspect addons-454747 --format={{.State.Status}}
	I1115 09:11:03.135530  369057 ssh_runner.go:195] Run: systemctl --version
	I1115 09:11:03.135592  369057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-454747
	I1115 09:11:03.153569  369057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/addons-454747/id_rsa Username:docker}
	I1115 09:11:03.246992  369057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:11:03.247290  369057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:11:03.278657  369057 cri.go:89] found id: "a113ced30ad2c99dc403ac92e9dfe61f5e3ce0f4be7c9e9f0acab43b06d89fe0"
	I1115 09:11:03.278678  369057 cri.go:89] found id: "15b0038a933d37b2838d21433b2568fe5d94d613dda0eb6fbf049c541fcf5c56"
	I1115 09:11:03.278682  369057 cri.go:89] found id: "32d50218303b07c6c990a564ea77640b820092db6547dd9b4046ed776201c1a8"
	I1115 09:11:03.278687  369057 cri.go:89] found id: "9585fc97c246112ed82a751e15c1194440400ac5030a8a9c7d58b5aa86150536"
	I1115 09:11:03.278691  369057 cri.go:89] found id: "29cde6adf092c88d34433b9198540e57ef68a8382f89ee7fc1267af53dcf172e"
	I1115 09:11:03.278695  369057 cri.go:89] found id: "c7e613941608e46bb072c9edf5b39b164adf9fab823d38b4bf8ee67472dcf63b"
	I1115 09:11:03.278699  369057 cri.go:89] found id: "1fb29add2d5a81a27fec5212fb2c5e570a7b2ca2e241cd47618a2149cc385a1b"
	I1115 09:11:03.278704  369057 cri.go:89] found id: "f093743456ae57037f79efc1d1e2e78e87b57b261e2c0f985554d1036b780ee3"
	I1115 09:11:03.278707  369057 cri.go:89] found id: "9a64b60b839d5cbbb00c4ffae75d1e6994922010a1139ed4e542081ae693ef68"
	I1115 09:11:03.278723  369057 cri.go:89] found id: "d318e1e5a03be003b0656505a9b5868f22be48957bfabac01fe5bed18972db2e"
	I1115 09:11:03.278729  369057 cri.go:89] found id: "39e0b3ce592311497c3b7ba6ce0a39925488e7da7572d506512283e0d8063fca"
	I1115 09:11:03.278732  369057 cri.go:89] found id: "dd10873e5c8f41af2f13b58f15e33a82086a9be3e93d931927f01c44f9dff93f"
	I1115 09:11:03.278735  369057 cri.go:89] found id: "92dbc66a225a6b3141ae881bedf40b483bf5861f676f3e44774c451c6456de19"
	I1115 09:11:03.278737  369057 cri.go:89] found id: "61c26678bcffa4406ce24011e8547c2518194c6fed63b0f1f2ac4edc5bc301d6"
	I1115 09:11:03.278740  369057 cri.go:89] found id: "c485f7a9c3e2bcd6f93550926f96b7a4beb2264ed1816f766648e0dbde0ff06b"
	I1115 09:11:03.278752  369057 cri.go:89] found id: "7370b2befcb1e272cabf015a7d9b4949a4c0e07322bc08fc1f41ba708a895bd1"
	I1115 09:11:03.278761  369057 cri.go:89] found id: "79d436a219f2f3ecf9bffb38b571b76eadcecc2c1fc352a76e96db6c31f7105c"
	I1115 09:11:03.278766  369057 cri.go:89] found id: "73844762f56631c46c7beaa5090e526d0211f09d08102f6aebb6bc65b9b6abe9"
	I1115 09:11:03.278771  369057 cri.go:89] found id: "bb9cab6c50c64f3ba4dc689557ea588a6e4b8718d5a4cc5a86ca0c78a3841c9a"
	I1115 09:11:03.278775  369057 cri.go:89] found id: "ab522c42d68a89f1c8b8848fce1f9785b673459ad78234f3ebdb7bf418068641"
	I1115 09:11:03.278783  369057 cri.go:89] found id: "6dd9f12c0f48ac51bda21759251c093c132d97e7ddaa60a5122159604ef07072"
	I1115 09:11:03.278790  369057 cri.go:89] found id: "a73de86856e0eb421df37f46dab9f4a69f7908e12da368ee83d88f6f5d0a393b"
	I1115 09:11:03.278795  369057 cri.go:89] found id: "475fb5d70b55553f8bbf65caee6363f60b9bcd398854a1011303b9f281653dc8"
	I1115 09:11:03.278799  369057 cri.go:89] found id: "b4dce63e838db332022887efd42cdb924f88196851773a0224b51322724bf59f"
	I1115 09:11:03.278810  369057 cri.go:89] found id: ""
	I1115 09:11:03.278865  369057 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:11:03.293165  369057 out.go:203] 
	W1115 09:11:03.294468  369057 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:11:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:11:03.294495  369057 out.go:285] * 
	W1115 09:11:03.298549  369057 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:11:03.300029  369057 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-454747 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.25s)
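
The pause pre-check is what actually fails here, not the addon itself: on this CRI-O node minikube checks for paused containers by shelling out to "sudo runc list -f json", and that command exits 1 because /run/runc does not exist. A minimal reproduction sketch, reusing only commands already shown in the log above (assumes the addons-454747 node is still running):

	# listing kube-system containers over the CRI socket succeeds
	out/minikube-linux-amd64 -p addons-454747 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the paused-state fallback the disable trips over; fails with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-454747 ssh -- sudo runc list -f json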

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-838035 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-838035 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bfskx" [75f78ee9-538b-4d5b-8299-6b28f1a8eae5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-838035 -n functional-838035
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-15 09:26:28.888319932 +0000 UTC m=+1104.187656176
functional_test.go:1645: (dbg) Run:  kubectl --context functional-838035 describe po hello-node-connect-7d85dfc575-bfskx -n default
functional_test.go:1645: (dbg) kubectl --context functional-838035 describe po hello-node-connect-7d85dfc575-bfskx -n default:
Name:             hello-node-connect-7d85dfc575-bfskx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-838035/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:16:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bxl22 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bxl22:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bfskx to functional-838035
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m5s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m5s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-838035 logs hello-node-connect-7d85dfc575-bfskx -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-838035 logs hello-node-connect-7d85dfc575-bfskx -n default: exit status 1 (63.775043ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bfskx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-838035 logs hello-node-connect-7d85dfc575-bfskx -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
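The pod events above show the root cause: the deployment uses the unqualified image name "kicbase/echo-server", and CRI-O's short-name resolution is in enforcing mode, so the ambiguous short name is rejected on every pull attempt and the pod never leaves ImagePullBackOff. A hedged sketch of the same deployment with a fully qualified reference, which avoids short-name resolution entirely (assumes the image is published on Docker Hub; the registry and tag are illustrative, not taken from the test):

	kubectl --context functional-838035 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-838035 expose deployment hello-node-connect --type=NodePort --port=8080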
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-838035 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-bfskx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-838035/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:16:28 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bxl22 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bxl22:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bfskx to functional-838035
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x22 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-838035 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-838035 logs -l app=hello-node-connect: exit status 1 (63.198752ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bfskx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-838035 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-838035 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.117.30
IPs:                      10.100.117.30
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31410/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
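Consistent with the pull failure, the service above has an empty Endpoints field, so NodePort 31410 has no ready backend to route to. Two quick checks that make this explicit (a sketch using the same kubectl context as the test):

	kubectl --context functional-838035 get endpoints hello-node-connect
	kubectl --context functional-838035 get pods -l app=hello-node-connect -o wide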
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-838035
helpers_test.go:243: (dbg) docker inspect functional-838035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac8acbd58aa15324b9e60df79cee98926664e72aaf7b2aa60d4d6ba1a1caa7c6",
	        "Created": "2025-11-15T09:14:52.789292642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 382850,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:14:52.821575865Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ac8acbd58aa15324b9e60df79cee98926664e72aaf7b2aa60d4d6ba1a1caa7c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac8acbd58aa15324b9e60df79cee98926664e72aaf7b2aa60d4d6ba1a1caa7c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac8acbd58aa15324b9e60df79cee98926664e72aaf7b2aa60d4d6ba1a1caa7c6/hosts",
	        "LogPath": "/var/lib/docker/containers/ac8acbd58aa15324b9e60df79cee98926664e72aaf7b2aa60d4d6ba1a1caa7c6/ac8acbd58aa15324b9e60df79cee98926664e72aaf7b2aa60d4d6ba1a1caa7c6-json.log",
	        "Name": "/functional-838035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-838035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-838035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac8acbd58aa15324b9e60df79cee98926664e72aaf7b2aa60d4d6ba1a1caa7c6",
	                "LowerDir": "/var/lib/docker/overlay2/9e2d3aaea703f172151d00a888fc762d9727376e4370e8f15cbd5d55cde7d233-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e2d3aaea703f172151d00a888fc762d9727376e4370e8f15cbd5d55cde7d233/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e2d3aaea703f172151d00a888fc762d9727376e4370e8f15cbd5d55cde7d233/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e2d3aaea703f172151d00a888fc762d9727376e4370e8f15cbd5d55cde7d233/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-838035",
	                "Source": "/var/lib/docker/volumes/functional-838035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-838035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-838035",
	                "name.minikube.sigs.k8s.io": "functional-838035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "80dc1b9f0684dde1674492744ac1227219f3eafd10a586a97e2734d6730a2ea6",
	            "SandboxKey": "/var/run/docker/netns/80dc1b9f0684",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-838035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "800897e3ac661e5c09ddea713319b85bcc42788bcf2c629465b1bdf403291e5c",
	                    "EndpointID": "3d5bb570f971a983bc7f7e019a9a4ec084eef7df35ba7f4a5313b6614bf3d26d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "72:ac:c9:f9:0f:a4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-838035",
	                        "ac8acbd58aa1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-838035 -n functional-838035
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 logs -n 25: (1.367337248s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-838035 /tmp/TestFunctionalparallelMountCmdspecific-port319790668/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ ssh            │ functional-838035 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ ssh            │ functional-838035 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │ 15 Nov 25 09:16 UTC │
	│ ssh            │ functional-838035 ssh -- ls -la /mount-9p                                                                                        │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │ 15 Nov 25 09:16 UTC │
	│ ssh            │ functional-838035 ssh sudo umount -f /mount-9p                                                                                   │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ mount          │ -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount3 --alsologtostderr -v=1               │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ mount          │ -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount1 --alsologtostderr -v=1               │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ ssh            │ functional-838035 ssh findmnt -T /mount1                                                                                         │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ mount          │ -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount2 --alsologtostderr -v=1               │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ ssh            │ functional-838035 ssh findmnt -T /mount1                                                                                         │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │ 15 Nov 25 09:16 UTC │
	│ ssh            │ functional-838035 ssh findmnt -T /mount2                                                                                         │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │ 15 Nov 25 09:16 UTC │
	│ ssh            │ functional-838035 ssh findmnt -T /mount3                                                                                         │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │ 15 Nov 25 09:16 UTC │
	│ mount          │ -p functional-838035 --kill=true                                                                                                 │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:16 UTC │                     │
	│ ssh            │ functional-838035 ssh sudo cat /etc/test/nested/copy/359063/hosts                                                                │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ image          │ functional-838035 image ls --format short --alsologtostderr                                                                      │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ image          │ functional-838035 image ls --format yaml --alsologtostderr                                                                       │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ ssh            │ functional-838035 ssh pgrep buildkitd                                                                                            │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │                     │
	│ image          │ functional-838035 image build -t localhost/my-image:functional-838035 testdata/build --alsologtostderr                           │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ image          │ functional-838035 image ls                                                                                                       │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ image          │ functional-838035 image ls --format json --alsologtostderr                                                                       │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ image          │ functional-838035 image ls --format table --alsologtostderr                                                                      │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ update-context │ functional-838035 update-context --alsologtostderr -v=2                                                                          │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ update-context │ functional-838035 update-context --alsologtostderr -v=2                                                                          │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ update-context │ functional-838035 update-context --alsologtostderr -v=2                                                                          │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:17 UTC │ 15 Nov 25 09:17 UTC │
	│ service        │ functional-838035 service list                                                                                                   │ functional-838035 │ jenkins │ v1.37.0 │ 15 Nov 25 09:26 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:16:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:16:56.213164  396341 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:16:56.213309  396341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:56.213318  396341 out.go:374] Setting ErrFile to fd 2...
	I1115 09:16:56.213322  396341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:56.213614  396341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:16:56.214156  396341 out.go:368] Setting JSON to false
	I1115 09:16:56.215139  396341 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3557,"bootTime":1763194659,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:16:56.215242  396341 start.go:143] virtualization: kvm guest
	I1115 09:16:56.217204  396341 out.go:179] * [functional-838035] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:16:56.218881  396341 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:16:56.218911  396341 notify.go:221] Checking for updates...
	I1115 09:16:56.221073  396341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:16:56.222633  396341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:16:56.223752  396341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:16:56.224880  396341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:16:56.225935  396341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:16:56.227643  396341 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:16:56.228470  396341 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:16:56.257313  396341 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:16:56.257422  396341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:16:56.317342  396341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-15 09:16:56.308241226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:16:56.317476  396341 docker.go:319] overlay module found
	I1115 09:16:56.321449  396341 out.go:179] * Using the docker driver based on existing profile
	I1115 09:16:56.322469  396341 start.go:309] selected driver: docker
	I1115 09:16:56.322486  396341 start.go:930] validating driver "docker" against &{Name:functional-838035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-838035 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:16:56.322581  396341 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:16:56.323979  396341 out.go:203] 
	W1115 09:16:56.324980  396341 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:16:56.325929  396341 out.go:203] 
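The "Last Start" captured above exited before reconfiguring the cluster: the requested memory allocation of 250MiB is below minikube's usable minimum of 1800MB (the RSRC_INSUFFICIENT_REQ_MEMORY line). Illustrative invocations only; the exact flags of that failing run are not shown in this log:

	out/minikube-linux-amd64 start -p functional-838035 --memory=250mb   # rejected: below the 1800MB minimum
	out/minikube-linux-amd64 start -p functional-838035 --memory=2048mb  # an allocation at or above 1800MB is accepted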
	
	
	==> CRI-O <==
	Nov 15 09:17:03 functional-838035 crio[3580]: time="2025-11-15T09:17:03.008597361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:17:03 functional-838035 crio[3580]: time="2025-11-15T09:17:03.01268262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:17:03 functional-838035 crio[3580]: time="2025-11-15T09:17:03.012896702Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3a7cb16a2a29c4261eb0530762c58d4a69471aa74199850aa34cf635d870e243/merged/etc/group: no such file or directory"
	Nov 15 09:17:03 functional-838035 crio[3580]: time="2025-11-15T09:17:03.013198588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:17:03 functional-838035 crio[3580]: time="2025-11-15T09:17:03.037867267Z" level=info msg="Created container f9e64f17fef1a0395dd5bf92f1fca65110c00626e67605689ff98b1d36439e39: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fkcgn/dashboard-metrics-scraper" id=3bc2cc30-fb85-4233-88fa-a3108e3065f5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:17:03 functional-838035 crio[3580]: time="2025-11-15T09:17:03.03845728Z" level=info msg="Starting container: f9e64f17fef1a0395dd5bf92f1fca65110c00626e67605689ff98b1d36439e39" id=50a6d4f5-625c-4a18-9347-35f034514ac2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:17:03 functional-838035 crio[3580]: time="2025-11-15T09:17:03.040148186Z" level=info msg="Started container" PID=7108 containerID=f9e64f17fef1a0395dd5bf92f1fca65110c00626e67605689ff98b1d36439e39 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-fkcgn/dashboard-metrics-scraper id=50a6d4f5-625c-4a18-9347-35f034514ac2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d12ec0430254ab86dbe513737a5d2758c887e7ebecad89e077cf182c12188ddb
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.558808458Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=02ce76e2-9a32-4d34-9caa-999b109cf5fd name=/runtime.v1.ImageService/PullImage
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.559637567Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=78b490fb-978d-46f6-bb25-5ebac865bb42 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.561579076Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e16cf867-671e-449e-ac02-2414497855ec name=/runtime.v1.ImageService/PullImage
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.561945961Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=943e03df-c73a-400a-8445-a8d3096672d3 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.56236615Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=09d6ae7b-c54a-4837-bb06-e54dc69b2d54 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.569013584Z" level=info msg="Creating container: default/mysql-5bb876957f-6dvxh/mysql" id=d8ec0f14-8f04-42ec-a91c-54330e083a0c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.569171134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.577371826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.57818393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.616010569Z" level=info msg="Created container 2d32c25a68204e168cdf6ecafa7108eeba74bf01af333a4ad8eded8f5e567eae: default/mysql-5bb876957f-6dvxh/mysql" id=d8ec0f14-8f04-42ec-a91c-54330e083a0c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.616675838Z" level=info msg="Starting container: 2d32c25a68204e168cdf6ecafa7108eeba74bf01af333a4ad8eded8f5e567eae" id=b03c9862-718e-47f4-b2c4-262bdf986d2b name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:17:09 functional-838035 crio[3580]: time="2025-11-15T09:17:09.618460552Z" level=info msg="Started container" PID=7582 containerID=2d32c25a68204e168cdf6ecafa7108eeba74bf01af333a4ad8eded8f5e567eae description=default/mysql-5bb876957f-6dvxh/mysql id=b03c9862-718e-47f4-b2c4-262bdf986d2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8aeb4d51ceabdb8b2c7373698a527493abac3863ed29e7c48d0b4be5bd82381
	Nov 15 09:17:51 functional-838035 crio[3580]: time="2025-11-15T09:17:51.388007595Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c1c66453-9441-4f6d-b243-f0e4700b4586 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:18:02 functional-838035 crio[3580]: time="2025-11-15T09:18:02.386524553Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5387f8a9-d017-468a-a74f-f6272847b8df name=/runtime.v1.ImageService/PullImage
	Nov 15 09:19:16 functional-838035 crio[3580]: time="2025-11-15T09:19:16.386949809Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2c468140-784e-4185-bf48-4487d97a7cff name=/runtime.v1.ImageService/PullImage
	Nov 15 09:19:23 functional-838035 crio[3580]: time="2025-11-15T09:19:23.386730257Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=530c3a9a-0f5c-4841-b954-9eed59ec4fdb name=/runtime.v1.ImageService/PullImage
	Nov 15 09:22:06 functional-838035 crio[3580]: time="2025-11-15T09:22:06.387125127Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=32702447-318b-4784-926d-f0598969399e name=/runtime.v1.ImageService/PullImage
	Nov 15 09:22:11 functional-838035 crio[3580]: time="2025-11-15T09:22:11.386954886Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=406cb0dd-ab81-46bf-a974-75625c272007 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2d32c25a68204       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   d8aeb4d51ceab       mysql-5bb876957f-6dvxh                       default
	f9e64f17fef1a       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   d12ec0430254a       dashboard-metrics-scraper-77bf4d6c4c-fkcgn   kubernetes-dashboard
	9fee1a9e45eee       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   2bab71903ab76       kubernetes-dashboard-855c9754f9-lrp4s        kubernetes-dashboard
	9f49d5cefdb55       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   4c94486006ace       busybox-mount                                default
	5fcd23d857d41       docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b                  9 minutes ago       Running             myfrontend                  0                   c383244781a05       sp-pod                                       default
	d031b9405f55a       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   4089ae1bad1f7       nginx-svc                                    default
	3559f61b54404       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   c309b87e66f3b       kube-apiserver-functional-838035             kube-system
	c7012c2fb40a8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   4beb79a88fc86       kube-controller-manager-functional-838035    kube-system
	a3185eb3313b3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   d0d14965be506       etcd-functional-838035                       kube-system
	b8f9c497589be       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Created             kube-apiserver              1                   27ce35c288326       kube-apiserver-functional-838035             kube-system
	689a921fe702f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   97e4104308272       kindnet-b2ff7                                kube-system
	a60a77f4cb61f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   5cf7f44181db4       kube-scheduler-functional-838035             kube-system
	7122bf834e300       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   4beb79a88fc86       kube-controller-manager-functional-838035    kube-system
	7fd38ba1a94bb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   e58d68b9086b4       kube-proxy-lh4ht                             kube-system
	cce2da063d696       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   794ee6166679c       coredns-66bc5c9577-zgv26                     kube-system
	dbfbca1e12866       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   144f9fd27db8b       storage-provisioner                          kube-system
	b5aec29a6c674       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   794ee6166679c       coredns-66bc5c9577-zgv26                     kube-system
	c843ca28d7223       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   144f9fd27db8b       storage-provisioner                          kube-system
	fb034e05d8d6d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   97e4104308272       kindnet-b2ff7                                kube-system
	ae296c71d513d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   e58d68b9086b4       kube-proxy-lh4ht                             kube-system
	1aa603c7fd949       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   5cf7f44181db4       kube-scheduler-functional-838035             kube-system
	029e42b3d3a5d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   d0d14965be506       etcd-functional-838035                       kube-system
	
	
	==> coredns [b5aec29a6c674fa8ea059bf7a69786db6524237d136ff78a7383f3959872d7d1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57898 - 24133 "HINFO IN 1912886777599382579.1822826523542569876. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020933218s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cce2da063d696e2d6e15807a83de7ec76264da3c98b3b21b3dd6fd0d627cfc44] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37435 - 32492 "HINFO IN 5789148219824912760.7396245808479519979. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074980979s
	
	
	==> describe nodes <==
	Name:               functional-838035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-838035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=functional-838035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_15_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:15:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-838035
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:26:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:24:21 +0000   Sat, 15 Nov 2025 09:15:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:24:21 +0000   Sat, 15 Nov 2025 09:15:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:24:21 +0000   Sat, 15 Nov 2025 09:15:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:24:21 +0000   Sat, 15 Nov 2025 09:15:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-838035
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                6fc20e01-8b3d-478a-9e2b-b848995ad5f3
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2m9sw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-bfskx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-6dvxh                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m30s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-zgv26                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-838035                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-b2ff7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-838035              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-838035     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lh4ht                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-838035              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-fkcgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lrp4s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-838035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-838035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-838035 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-838035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-838035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-838035 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-838035 event: Registered Node functional-838035 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-838035 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-838035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-838035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-838035 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-838035 event: Registered Node functional-838035 in Controller
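The node conditions and resource totals shown above can be re-queried directly; a minimal sketch, assuming kubectl is still pointed at the functional-838035 cluster (these commands are illustrative and were not part of the recorded run):

	# Print the condition types/statuses summarized in the Conditions table
	kubectl get node functional-838035 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	# Show the same request/limit totals as the Allocated resources block
	kubectl describe node functional-838035 | grep -A 8 'Allocated resources'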
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
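The repeated "martian source" entries are reverse-path-filter logging for hairpinned pod traffic on eth0; a quick way to inspect the sysctls that control this logging (a sketch only, not captured during the run):

	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians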
	
	
	==> etcd [029e42b3d3a5dcbdf2f7adbf459bd9e830b52446a9c5e0c48d8a9a6f5a24e5bf] <==
	{"level":"warn","ts":"2025-11-15T09:15:02.277356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:15:02.285282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:15:02.292497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:15:02.317523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:15:02.324749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:15:02.332277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:15:02.388089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37228","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:15:59.148908Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T09:15:59.148988Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-838035","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-15T09:15:59.149080Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:15:59.150666Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:15:59.150739Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:15:59.150803Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-15T09:15:59.150812Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:15:59.150824Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:15:59.150866Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:15:59.150886Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T09:15:59.150873Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-15T09:15:59.150889Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-15T09:15:59.150885Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-11-15T09:15:59.150908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:15:59.152648Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-15T09:15:59.152695Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:15:59.152718Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-15T09:15:59.152762Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-838035","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a3185eb3313b3afb4e26d461a35e74032297e5df7f49e29ad55109f9a1a9296b] <==
	{"level":"warn","ts":"2025-11-15T09:16:02.649270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.655673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.664249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.671566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.679381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.687409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.696565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.703096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.709864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.717259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.723836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.730414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.738812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.744969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.752223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.758562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.765302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.787677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.791120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.797072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.803220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:16:02.849673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49838","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:26:02.356606Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1172}
	{"level":"info","ts":"2025-11-15T09:26:02.376193Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1172,"took":"19.195022ms","hash":991425844,"current-db-size-bytes":3469312,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-15T09:26:02.376250Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":991425844,"revision":1172,"compact-revision":-1}
	
	
	==> kernel <==
	 09:26:30 up  1:08,  0 user,  load average: 0.02, 0.33, 1.12
	Linux functional-838035 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [689a921fe702fc96882d24268653e8eba3603f69a3b880bb7ae9bb6893d6744f] <==
	I1115 09:24:29.712381       1 main.go:301] handling current node
	I1115 09:24:39.711815       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:24:39.711855       1 main.go:301] handling current node
	I1115 09:24:49.710196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:24:49.710228       1 main.go:301] handling current node
	I1115 09:24:59.710884       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:24:59.710927       1 main.go:301] handling current node
	I1115 09:25:09.710326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:25:09.710360       1 main.go:301] handling current node
	I1115 09:25:19.709922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:25:19.709991       1 main.go:301] handling current node
	I1115 09:25:29.711034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:25:29.711069       1 main.go:301] handling current node
	I1115 09:25:39.712767       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:25:39.712825       1 main.go:301] handling current node
	I1115 09:25:49.716254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:25:49.716290       1 main.go:301] handling current node
	I1115 09:25:59.711783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:25:59.711827       1 main.go:301] handling current node
	I1115 09:26:09.709551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:26:09.709593       1 main.go:301] handling current node
	I1115 09:26:19.709912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:26:19.709961       1 main.go:301] handling current node
	I1115 09:26:29.710352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:26:29.710446       1 main.go:301] handling current node
	
	
	==> kindnet [fb034e05d8d6d37bbf95a1c1d0818bf2d61a8ae519f6a22df262d21435fd704e] <==
	I1115 09:15:11.447340       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:15:11.481134       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1115 09:15:11.481290       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:15:11.481307       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:15:11.481327       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:15:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:15:11.684186       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:15:11.684206       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:15:11.684214       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:15:11.684323       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:15:12.084387       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:15:12.084443       1 metrics.go:72] Registering metrics
	I1115 09:15:12.084504       1 controller.go:711] "Syncing nftables rules"
	I1115 09:15:21.685017       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:15:21.685080       1 main.go:301] handling current node
	I1115 09:15:31.691044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:15:31.691075       1 main.go:301] handling current node
	I1115 09:15:41.688491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:15:41.688536       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3559f61b54404eedeec1690342baaeaf35c1e71409517b8b5b20071d95fa7ac0] <==
	I1115 09:16:03.428683       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:16:03.430751       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:16:04.210992       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1115 09:16:04.411461       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1115 09:16:04.412845       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:16:04.417735       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:16:04.734175       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:16:04.825676       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:16:04.877372       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:16:04.882968       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:16:07.036514       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:16:23.967492       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.176.12"}
	I1115 09:16:27.974711       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.108.38"}
	I1115 09:16:28.555049       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.117.30"}
	I1115 09:16:30.740600       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.169.245"}
	E1115 09:16:44.265219       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41454: use of closed network connection
	E1115 09:16:54.069781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60460: use of closed network connection
	I1115 09:16:57.190625       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:16:57.317052       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.243.84"}
	I1115 09:16:57.326872       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.224.69"}
	I1115 09:17:00.467064       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.115.67"}
	E1115 09:17:15.608166       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45670: use of closed network connection
	E1115 09:17:17.003019       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45684: use of closed network connection
	E1115 09:17:18.692381       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45692: use of closed network connection
	I1115 09:26:03.220842       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-apiserver [b8f9c497589be658bd41fc3348749f18dce6176b906b957e998283423143fd03] <==
	
	
	==> kube-controller-manager [7122bf834e3000eb6f00c9be287ee4c21dec81f6aa4b0fe6c73273dbe4ceabcb] <==
	I1115 09:15:49.722257       1 serving.go:386] Generated self-signed cert in-memory
	I1115 09:15:50.160430       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1115 09:15:50.160474       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:15:50.163114       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1115 09:15:50.163118       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1115 09:15:50.163652       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1115 09:15:50.163778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 09:16:00.166219       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c7012c2fb40a812f5d5604784e2c74232046f3953dbaec96bb880047c51c8d88] <==
	I1115 09:16:06.632916       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:16:06.632916       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 09:16:06.634131       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 09:16:06.634172       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 09:16:06.636444       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 09:16:06.638708       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:16:06.638729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:16:06.639863       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 09:16:06.641105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 09:16:06.641149       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 09:16:06.641178       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 09:16:06.641182       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 09:16:06.641186       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 09:16:06.641364       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 09:16:06.644588       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:16:06.659854       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:16:06.665132       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 09:16:06.667360       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 09:16:06.669814       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	E1115 09:16:57.253124       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:16:57.260950       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:16:57.261685       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:16:57.265307       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:16:57.267754       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:16:57.271288       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [7fd38ba1a94bbc8844b4ce586b8b4e171ef02361548d20812cefa82325b6a82c] <==
	I1115 09:15:49.378588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1115 09:15:49.379614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-838035&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:15:50.534874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-838035&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:15:52.939177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-838035&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:15:56.761700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-838035&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 09:16:03.479579       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:16:03.479634       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:16:03.479735       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:16:03.499096       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:16:03.499145       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:16:03.504683       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:16:03.504978       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:16:03.505004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:16:03.506648       1 config.go:200] "Starting service config controller"
	I1115 09:16:03.506668       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:16:03.506868       1 config.go:309] "Starting node config controller"
	I1115 09:16:03.506885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:16:03.506892       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:16:03.507066       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:16:03.507158       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:16:03.507462       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:16:03.507522       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:16:03.607697       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:16:03.607731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:16:03.607760       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
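The "Kube-proxy configuration may be incomplete or incorrect" warning above is informational: with nodePortAddresses unset, NodePort services accept connections on every local IP. A sketch of the restriction the message itself suggests (hypothetical flag usage, not applied in this run):

	kube-proxy --nodeport-addresses=primary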
	
	
	==> kube-proxy [ae296c71d513d67b4e2cdf14d501fc20e691b715299ce5101e5ee2cbd9557b32] <==
	I1115 09:15:11.321474       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:15:11.391424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:15:11.491971       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:15:11.492016       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:15:11.492140       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:15:11.512347       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:15:11.512428       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:15:11.517459       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:15:11.518195       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:15:11.518233       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:15:11.520368       1 config.go:200] "Starting service config controller"
	I1115 09:15:11.520432       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:15:11.520460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:15:11.520471       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:15:11.520535       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:15:11.520388       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:15:11.520525       1 config.go:309] "Starting node config controller"
	I1115 09:15:11.520565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:15:11.520572       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:15:11.621570       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:15:11.621613       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:15:11.621636       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1aa603c7fd9493f9de090307b720a543c47c818c9434580da49f492f16718808] <==
	E1115 09:15:02.805659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:15:02.805660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:15:02.805699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:15:02.805728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:15:02.805749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:15:02.805820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:15:03.615528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:15:03.645071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:15:03.661525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:15:03.670083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:15:03.731209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:15:03.743714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:15:03.773950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:15:03.791145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:15:03.824792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:15:03.863978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:15:03.911511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:15:03.913439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:15:03.963489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1115 09:15:06.404056       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:15:48.526849       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:15:48.527042       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 09:15:48.527142       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 09:15:48.527153       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 09:15:48.527184       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a60a77f4cb61f982acd521d7908ecdb22318afeb287618ac8f6098949bda2d28] <==
	E1115 09:15:54.374633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:15:54.485633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:15:54.718133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:15:54.762782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:15:54.784271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:15:56.940231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:15:56.964797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:15:57.602810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:15:57.853650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:15:58.448338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:15:58.542212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:15:58.674141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:15:59.055617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:15:59.069026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:15:59.405688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:15:59.458514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:15:59.652992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:15:59.856751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:15:59.884223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:16:00.318711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:16:00.346255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:16:00.489055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:16:00.678451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:16:01.074039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1115 09:16:08.611050       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:23:50 functional-838035 kubelet[4144]: E1115 09:23:50.386373    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:24:00 functional-838035 kubelet[4144]: E1115 09:24:00.386717    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:24:02 functional-838035 kubelet[4144]: E1115 09:24:02.386046    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:24:14 functional-838035 kubelet[4144]: E1115 09:24:14.386693    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:24:17 functional-838035 kubelet[4144]: E1115 09:24:17.386928    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:24:26 functional-838035 kubelet[4144]: E1115 09:24:26.385892    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:24:28 functional-838035 kubelet[4144]: E1115 09:24:28.386945    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:24:38 functional-838035 kubelet[4144]: E1115 09:24:38.386077    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:24:43 functional-838035 kubelet[4144]: E1115 09:24:43.386674    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:24:50 functional-838035 kubelet[4144]: E1115 09:24:50.386030    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:24:55 functional-838035 kubelet[4144]: E1115 09:24:55.386930    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:25:04 functional-838035 kubelet[4144]: E1115 09:25:04.385944    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:25:08 functional-838035 kubelet[4144]: E1115 09:25:08.386807    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:25:18 functional-838035 kubelet[4144]: E1115 09:25:18.386063    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:25:22 functional-838035 kubelet[4144]: E1115 09:25:22.385876    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:25:31 functional-838035 kubelet[4144]: E1115 09:25:31.386338    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:25:33 functional-838035 kubelet[4144]: E1115 09:25:33.386734    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:25:43 functional-838035 kubelet[4144]: E1115 09:25:43.386579    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:25:46 functional-838035 kubelet[4144]: E1115 09:25:46.386056    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:25:57 functional-838035 kubelet[4144]: E1115 09:25:57.388787    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:25:59 functional-838035 kubelet[4144]: E1115 09:25:59.386341    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:26:12 functional-838035 kubelet[4144]: E1115 09:26:12.385838    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	Nov 15 09:26:12 functional-838035 kubelet[4144]: E1115 09:26:12.385846    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:26:25 functional-838035 kubelet[4144]: E1115 09:26:25.385974    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bfskx" podUID="75f78ee9-538b-4d5b-8299-6b28f1a8eae5"
	Nov 15 09:26:25 functional-838035 kubelet[4144]: E1115 09:26:25.385995    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2m9sw" podUID="79c31418-a7d8-4766-9f89-41a14e3df322"
	
	
	==> kubernetes-dashboard [9fee1a9e45eeec6401edc197a9352b3e0d24036491a0568700cc83376edb273f] <==
	2025/11/15 09:17:01 Using namespace: kubernetes-dashboard
	2025/11/15 09:17:01 Using in-cluster config to connect to apiserver
	2025/11/15 09:17:01 Using secret token for csrf signing
	2025/11/15 09:17:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 09:17:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 09:17:01 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 09:17:01 Generating JWE encryption key
	2025/11/15 09:17:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 09:17:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 09:17:01 Initializing JWE encryption key from synchronized object
	2025/11/15 09:17:01 Creating in-cluster Sidecar client
	2025/11/15 09:17:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 09:17:01 Serving insecurely on HTTP port: 9090
	2025/11/15 09:17:31 Successful request to sidecar
	2025/11/15 09:17:01 Starting overwatch
	
	
	==> storage-provisioner [c843ca28d7223693b15e770103f37dd74ec4b0e14d1a961f3722ebde1f3813c9] <==
	I1115 09:15:22.594493       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-838035_ae392de6-6b16-4173-9314-27d122ad4ea8!
	W1115 09:15:24.502227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:24.506513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:26.510473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:26.515296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:28.518601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:28.523018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:30.526695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:30.530717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:32.534166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:32.539031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:34.542256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:34.547257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:36.550302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:36.554776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:38.558540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:38.564184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:40.567916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:40.572097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:42.575652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:42.579858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:44.582893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:44.586613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:46.589919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:15:46.593840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dbfbca1e128663e4303c67775580e84b2b61bca6180f0b53ff67ed80601bdeac] <==
	W1115 09:26:06.756967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:08.760337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:08.765369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:10.768657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:10.773061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:12.776325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:12.780272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:14.783820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:14.787676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:16.790839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:16.795709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:18.799363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:18.803458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:20.806867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:20.811001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:22.813892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:22.818500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:24.821707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:24.825575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:26.829319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:26.834635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:28.837951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:28.843214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:30.846974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:26:30.851625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-838035 -n functional-838035
helpers_test.go:269: (dbg) Run:  kubectl --context functional-838035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2m9sw hello-node-connect-7d85dfc575-bfskx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-838035 describe pod busybox-mount hello-node-75c85bcc94-2m9sw hello-node-connect-7d85dfc575-bfskx
helpers_test.go:290: (dbg) kubectl --context functional-838035 describe pod busybox-mount hello-node-75c85bcc94-2m9sw hello-node-connect-7d85dfc575-bfskx:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-838035/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:16:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://9f49d5cefdb553b57f0c9c441e9529c2282a7e19143050c61abd1269b90ba3d9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 15 Nov 2025 09:16:52 +0000
	      Finished:     Sat, 15 Nov 2025 09:16:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqbzz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kqbzz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m41s  default-scheduler  Successfully assigned default/busybox-mount to functional-838035
	  Normal  Pulling    9m41s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m39s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.052s (2.052s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m39s  kubelet            Created container: mount-munger
	  Normal  Started    9m39s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2m9sw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-838035/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:16:27 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cxqbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cxqbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2m9sw to functional-838035
	  Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-bfskx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-838035/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:16:28 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bxl22 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bxl22:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bfskx to functional-838035
	  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m53s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m53s (x22 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.01s)
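The kubelet messages and pod events above all trace back to CRI-O short-name resolution: with short-name-mode set to enforcing, the unqualified image name kicbase/echo-server cannot be resolved to a single registry, so the pull never succeeds and the echo-server pods stay in ImagePullBackOff. A minimal workaround sketch, assuming the image is meant to come from Docker Hub and that the 1.0 tag is acceptable (both are assumptions, not taken from this run):

	# hypothetical fix: point the deployment at a fully qualified reference so
	# CRI-O never has to resolve the short name
	kubectl --context functional-838035 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:1.0

An [aliases] entry for kicbase/echo-server under /etc/containers/registries.conf.d on the node would achieve the same result without touching the deployment.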

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-838035 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-838035 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-2m9sw" [79c31418-a7d8-4766-9f89-41a14e3df322] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-838035 -n functional-838035
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-15 09:26:28.309842187 +0000 UTC m=+1103.609178436
functional_test.go:1460: (dbg) Run:  kubectl --context functional-838035 describe po hello-node-75c85bcc94-2m9sw -n default
functional_test.go:1460: (dbg) kubectl --context functional-838035 describe po hello-node-75c85bcc94-2m9sw -n default:
Name:             hello-node-75c85bcc94-2m9sw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-838035/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:16:27 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cxqbh (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-cxqbh:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2m9sw to functional-838035
Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-838035 logs hello-node-75c85bcc94-2m9sw -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-838035 logs hello-node-75c85bcc94-2m9sw -n default: exit status 1 (70.618025ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-2m9sw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-838035 logs hello-node-75c85bcc94-2m9sw -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)
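This is the same short-name failure as ServiceCmdConnect above, hit at deployment-creation time. A small diagnostic sketch, assuming SSH access via the minikube binary and the standard containers-registries.conf locations (the paths are conventional defaults, not taken from this run):

	# sketch only: confirm the node's short-name policy and any configured aliases
	out/minikube-linux-amd64 -p functional-838035 ssh -- \
	  grep -Rs "short-name-mode\|aliases" /etc/containers/registries.conf /etc/containers/registries.conf.d/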

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image load --daemon kicbase/echo-server:functional-838035 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-838035" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)
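The assertion only inspects the "image ls" output, so it does not distinguish a failed load from a failed listing. A minimal sketch for checking the node's image store directly, assuming crictl is present inside the kicbase node image (it normally is, but that is an assumption here):

	# sketch only: compare minikube's view with CRI-O's image store on the node
	out/minikube-linux-amd64 -p functional-838035 image ls
	out/minikube-linux-amd64 -p functional-838035 ssh -- sudo crictl images | grep echo-server || true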

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image load --daemon kicbase/echo-server:functional-838035 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-838035" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-838035
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image load --daemon kicbase/echo-server:functional-838035 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-838035" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image save kicbase/echo-server:functional-838035 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1115 09:16:45.922237  393596 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:16:45.922568  393596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:45.922579  393596 out.go:374] Setting ErrFile to fd 2...
	I1115 09:16:45.922583  393596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:45.922801  393596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:16:45.923406  393596 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:16:45.923499  393596 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:16:45.923853  393596 cli_runner.go:164] Run: docker container inspect functional-838035 --format={{.State.Status}}
	I1115 09:16:45.942496  393596 ssh_runner.go:195] Run: systemctl --version
	I1115 09:16:45.942545  393596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-838035
	I1115 09:16:45.959834  393596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/functional-838035/id_rsa Username:docker}
	I1115 09:16:46.052574  393596 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1115 09:16:46.052637  393596 cache_images.go:255] Failed to load cached images for "functional-838035": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1115 09:16:46.052660  393596 cache_images.go:267] failed pushing to: functional-838035

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
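The stderr makes the cause explicit: the tarball expected from the earlier ImageSaveToFile step was never written, so there is nothing to load. A minimal sketch of the save-then-load round trip in isolation, assuming a writable /tmp path (the path is illustrative only):

	# sketch only: reproduce the save/load round trip outside the test harness
	out/minikube-linux-amd64 -p functional-838035 image save kicbase/echo-server:functional-838035 /tmp/echo-server-save.tar --alsologtostderr
	test -f /tmp/echo-server-save.tar && \
	  out/minikube-linux-amd64 -p functional-838035 image load /tmp/echo-server-save.tar --alsologtostderr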

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-838035
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image save --daemon kicbase/echo-server:functional-838035 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-838035
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-838035: exit status 1 (16.909469ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-838035

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-838035

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
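The check looks for localhost/kicbase/echo-server:functional-838035 in the host's Docker daemon; since the tagged image never made it into the cluster in the first place, the image save --daemon step has nothing to export. A trivial sketch for seeing what, if anything, did land locally:

	# sketch only: list any echo-server tags present in the local Docker daemon
	docker images | grep echo-server || echo "no echo-server images in the local daemon"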

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 service --namespace=default --https --url hello-node: exit status 115 (538.310079ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32014
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-838035 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
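SVC_UNREACHABLE here is a downstream symptom rather than a service-command bug: the hello-node Service exists and exposes NodePort 32014, but no Ready pod backs it because of the image pull failure documented above. A minimal sketch for confirming that from the cluster side (the kubernetes.io/service-name label is the standard EndpointSlice convention, assumed rather than taken from this run):

	# sketch only: check whether the hello-node service has any ready endpoints
	kubectl --context functional-838035 get pods -l app=hello-node
	kubectl --context functional-838035 get endpointslices -l kubernetes.io/service-name=hello-node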

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 service hello-node --url --format={{.IP}}: exit status 115 (544.666931ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-838035 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 service hello-node --url: exit status 115 (539.467638ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32014
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-838035 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32014
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (433.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 stop --alsologtostderr -v 5
E1115 09:31:27.819633  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:27.826180  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:27.837587  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:27.861784  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:27.903868  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:27.985729  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:28.147377  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:28.469351  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:29.113559  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:30.394877  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:32.956309  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:38.077980  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 stop --alsologtostderr -v 5: (54.89317938s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 start --wait true --alsologtostderr -v 5
E1115 09:31:48.320027  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:32:08.802018  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:32:09.623675  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:32:49.764981  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:34:11.686668  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:35:46.555441  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:36:27.819990  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:36:55.529007  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-577290 start --wait true --alsologtostderr -v 5: exit status 80 (6m16.511851875s)

                                                
                                                
-- stdout --
	* [ha-577290] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-577290" primary control-plane node in "ha-577290" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-577290-m02" control-plane node in "ha-577290" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-577290-m03" control-plane node in "ha-577290" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-577290-m04" worker node in "ha-577290" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:31:45.266575  428896 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:31:45.266886  428896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:31:45.266898  428896 out.go:374] Setting ErrFile to fd 2...
	I1115 09:31:45.266902  428896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:31:45.267163  428896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:31:45.267737  428896 out.go:368] Setting JSON to false
	I1115 09:31:45.268710  428896 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4446,"bootTime":1763194659,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:31:45.268819  428896 start.go:143] virtualization: kvm guest
	I1115 09:31:45.270819  428896 out.go:179] * [ha-577290] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:31:45.272427  428896 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:31:45.272431  428896 notify.go:221] Checking for updates...
	I1115 09:31:45.274773  428896 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:31:45.276134  428896 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:45.277406  428896 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:31:45.278544  428896 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:31:45.280004  428896 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:31:45.281655  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:45.281802  428896 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:31:45.305468  428896 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:31:45.305577  428896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:31:45.363884  428896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-15 09:31:45.353980004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:31:45.363994  428896 docker.go:319] overlay module found
	I1115 09:31:45.366036  428896 out.go:179] * Using the docker driver based on existing profile
	I1115 09:31:45.367327  428896 start.go:309] selected driver: docker
	I1115 09:31:45.367347  428896 start.go:930] validating driver "docker" against &{Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:45.367524  428896 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:31:45.367608  428896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:31:45.426878  428896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-15 09:31:45.417064116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:31:45.427845  428896 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:31:45.427892  428896 cni.go:84] Creating CNI manager for ""
	I1115 09:31:45.427961  428896 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1115 09:31:45.428020  428896 start.go:353] cluster config:
	{Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:45.429910  428896 out.go:179] * Starting "ha-577290" primary control-plane node in "ha-577290" cluster
	I1115 09:31:45.431277  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:31:45.432779  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:31:45.434027  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:45.434081  428896 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:31:45.434108  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:31:45.434157  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:31:45.434217  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:31:45.434231  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:31:45.434406  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:45.454978  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:31:45.455002  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:31:45.455026  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:31:45.455057  428896 start.go:360] acquireMachinesLock for ha-577290: {Name:mk6172d84dd1d32a54848cf1d049455806d86fc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:31:45.455126  428896 start.go:364] duration metric: took 46.262µs to acquireMachinesLock for "ha-577290"
	I1115 09:31:45.455149  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:31:45.455159  428896 fix.go:54] fixHost starting: 
	I1115 09:31:45.455379  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:31:45.473405  428896 fix.go:112] recreateIfNeeded on ha-577290: state=Stopped err=<nil>
	W1115 09:31:45.473441  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:31:45.475321  428896 out.go:252] * Restarting existing docker container for "ha-577290" ...
	I1115 09:31:45.475413  428896 cli_runner.go:164] Run: docker start ha-577290
	I1115 09:31:45.734297  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:31:45.753588  428896 kic.go:430] container "ha-577290" state is running.
	I1115 09:31:45.753944  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:45.772816  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:45.773098  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:31:45.773176  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:45.793693  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:45.793956  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:45.793974  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:31:45.794782  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46998->127.0.0.1:33184: read: connection reset by peer
	I1115 09:31:48.924615  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290
	
	I1115 09:31:48.924669  428896 ubuntu.go:182] provisioning hostname "ha-577290"
	I1115 09:31:48.924735  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:48.943068  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:48.943339  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:48.943354  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290 && echo "ha-577290" | sudo tee /etc/hostname
	I1115 09:31:49.082618  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290
	
	I1115 09:31:49.082703  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.100574  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:49.100818  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:49.100842  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:31:49.230624  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:31:49.230659  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:31:49.230707  428896 ubuntu.go:190] setting up certificates
	I1115 09:31:49.230722  428896 provision.go:84] configureAuth start
	I1115 09:31:49.230803  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:49.249474  428896 provision.go:143] copyHostCerts
	I1115 09:31:49.249521  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:49.249578  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:31:49.249598  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:49.249677  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:31:49.249798  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:49.249825  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:31:49.249835  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:49.249880  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:31:49.250060  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:49.250160  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:31:49.250181  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:49.250240  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:31:49.250337  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290 san=[127.0.0.1 192.168.49.2 ha-577290 localhost minikube]
	I1115 09:31:49.553270  428896 provision.go:177] copyRemoteCerts
	I1115 09:31:49.553355  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:31:49.553408  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.571907  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:49.667671  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:31:49.667749  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:31:49.687153  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:31:49.687230  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1115 09:31:49.705517  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:31:49.705588  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
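For reference, the server certificate regenerated above carries the SANs requested in the provision step (127.0.0.1, 192.168.49.2, ha-577290, localhost, minikube) and is copied to /etc/docker/server.pem on the node. A minimal way to confirm the SANs actually landed, assuming the ha-577290 guest is reachable over minikube ssh (illustrative, not part of the test run):

    minikube -p ha-577290 ssh -- sudo openssl x509 -noout -text -in /etc/docker/server.pem \
      | grep -A1 'Subject Alternative Name'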
	I1115 09:31:49.723853  428896 provision.go:87] duration metric: took 493.11187ms to configureAuth
	I1115 09:31:49.723888  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:31:49.724092  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:49.724201  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.742818  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:49.743043  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:49.743057  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:31:50.033292  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:31:50.033324  428896 machine.go:97] duration metric: took 4.26020713s to provisionDockerMachine
	I1115 09:31:50.033341  428896 start.go:293] postStartSetup for "ha-577290" (driver="docker")
	I1115 09:31:50.033354  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:31:50.033471  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:31:50.033538  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.054075  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.149459  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:31:50.153204  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:31:50.153244  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:31:50.153258  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:31:50.153313  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:31:50.153436  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:31:50.153459  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:31:50.153592  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:31:50.161899  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:50.180230  428896 start.go:296] duration metric: took 146.870031ms for postStartSetup
	I1115 09:31:50.180319  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:31:50.180381  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.199337  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.290830  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:31:50.295656  428896 fix.go:56] duration metric: took 4.840490237s for fixHost
	I1115 09:31:50.295688  428896 start.go:83] releasing machines lock for "ha-577290", held for 4.840547311s
	I1115 09:31:50.295776  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:50.314561  428896 ssh_runner.go:195] Run: cat /version.json
	I1115 09:31:50.314634  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.314640  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:31:50.314706  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.333494  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.333615  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.480680  428896 ssh_runner.go:195] Run: systemctl --version
	I1115 09:31:50.487312  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:31:50.522567  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:31:50.527574  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:31:50.527668  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:31:50.536442  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
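The find/mv above renames any pre-existing bridge or podman CNI profiles to *.mk_disabled so they cannot conflict with the CNI minikube manages; in this run nothing matched. Listing the directory on the node shows what, if anything, was moved aside (a minimal check, assuming the guest is reachable):

    minikube -p ha-577290 ssh -- sudo ls -la /etc/cni/net.d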
	I1115 09:31:50.536471  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:31:50.536510  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:31:50.536562  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:31:50.552643  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:31:50.565682  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:31:50.565732  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:31:50.579797  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:31:50.592607  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:31:50.674494  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:31:50.753757  428896 docker.go:234] disabling docker service ...
	I1115 09:31:50.753838  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:31:50.768880  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:31:50.781446  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:31:50.862035  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:31:50.941863  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:31:50.955003  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:31:50.969531  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:31:50.969630  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.978678  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:31:50.978767  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.987922  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.997554  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.006963  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:31:51.015699  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.024835  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.033468  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.042627  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:31:51.050076  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:31:51.057319  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:51.138979  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:31:51.250267  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:31:51.250325  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:31:51.254431  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:31:51.254482  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:31:51.258072  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:31:51.283265  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:31:51.283331  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:31:51.311792  428896 ssh_runner.go:195] Run: crio --version
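The sed edits a few lines up pin the pause image, switch CRI-O to the systemd cgroup driver (with conmon in the pod cgroup), and open unprivileged ports via default_sysctls. Whether they took effect can be read back from the drop-in they modify; an illustrative spot check against the running guest:

    minikube -p ha-577290 ssh -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"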
	I1115 09:31:51.341627  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:31:51.342956  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:31:51.361359  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:31:51.365628  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
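The bash one-liner above is minikube's idempotent /etc/hosts update: drop any existing line that ends in the name, then append the current mapping. The same pattern written out as a standalone sketch (NAME and IP are placeholders; the values are the ones used in this run):

    NAME=host.minikube.internal
    IP=192.168.49.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$ \
      && sudo cp /tmp/hosts.$$ /etc/hosts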
	I1115 09:31:51.376129  428896 kubeadm.go:884] updating cluster {Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:31:51.376278  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:51.376328  428896 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:31:51.411138  428896 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:31:51.411158  428896 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:31:51.411201  428896 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:31:51.438061  428896 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:31:51.438086  428896 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:31:51.438095  428896 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:31:51.438206  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
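The [Service] override above is what ends up in the kubelet drop-in pushed to the node a few lines further down (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Once the profile is running, the effective unit can be compared against it (illustrative commands, not part of the test output):

    minikube -p ha-577290 ssh -- systemctl cat kubelet
    minikube -p ha-577290 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf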
	I1115 09:31:51.438283  428896 ssh_runner.go:195] Run: crio config
	I1115 09:31:51.486595  428896 cni.go:84] Creating CNI manager for ""
	I1115 09:31:51.486621  428896 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1115 09:31:51.486644  428896 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:31:51.486670  428896 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-577290 NodeName:ha-577290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:31:51.486829  428896 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-577290"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:31:51.486855  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:31:51.486908  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:31:51.499329  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:31:51.499466  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
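Because the ip_vs modules were unavailable, kube-vip skips IPVS load-balancing and relies on ARP plus leader election (vip_arp and the plndr-cp-lock lease named in the manifest above) to keep 192.168.49.254 on exactly one control-plane node. Two quick ways to observe that once the cluster is back, assuming kubectl points at this cluster (illustrative):

    kubectl -n kube-system get lease plndr-cp-lock -o yaml                 # holderIdentity = node currently advertising the VIP
    minikube -p ha-577290 ssh -- ip addr show eth0 | grep 192.168.49.254   # present only on the lease holder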
	I1115 09:31:51.499536  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:31:51.507665  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:31:51.507743  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 09:31:51.516035  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 09:31:51.528543  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:31:51.540903  428896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1115 09:31:51.553425  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
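The rendered kubeadm config and the kube-vip static-pod manifest are now staged on the node at the paths shown above. If the restart misbehaves later, they can be pulled back and linted; a sketch, assuming kubeadm sits in the versioned binaries directory checked a few lines back (/var/lib/minikube/binaries/v1.34.1) and that this kubeadm release ships the "config validate" subcommand (recent releases do):

    minikube -p ha-577290 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p ha-577290 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new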
	I1115 09:31:51.566186  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:31:51.569903  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:31:51.579760  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:51.657522  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:31:51.682929  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.2
	I1115 09:31:51.682962  428896 certs.go:195] generating shared ca certs ...
	I1115 09:31:51.682984  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.683252  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:31:51.683303  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:31:51.683316  428896 certs.go:257] generating profile certs ...
	I1115 09:31:51.683414  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:31:51.683438  428896 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd
	I1115 09:31:51.683459  428896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1115 09:31:51.902645  428896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd ...
	I1115 09:31:51.902677  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd: {Name:mk31504058a71e0f7602a819b395f2dc874b4f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.902882  428896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd ...
	I1115 09:31:51.902903  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd: {Name:mk62d65624b9927bec45ce4edc59d90214e67d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.903010  428896 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt
	I1115 09:31:51.903152  428896 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key
	I1115 09:31:51.903287  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:31:51.903304  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:31:51.903316  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:31:51.903328  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:31:51.903338  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:31:51.903350  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:31:51.903360  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:31:51.903371  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:31:51.903381  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:31:51.903453  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:31:51.903493  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:31:51.903503  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:31:51.903523  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:31:51.903545  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:31:51.903572  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:31:51.903616  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:51.903642  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:51.903656  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:31:51.903668  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:31:51.904202  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:31:51.923549  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:31:51.941100  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:31:51.959534  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:31:51.977478  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:31:51.995833  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:31:52.013950  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:31:52.032035  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:31:52.049984  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:31:52.068640  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:31:52.087500  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:31:52.105266  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:31:52.118376  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:31:52.124566  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:31:52.133079  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.137009  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.137067  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.171540  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:31:52.180359  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:31:52.191734  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.197586  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.197656  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.238367  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:31:52.248045  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:31:52.257259  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.262431  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.262498  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.310780  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:31:52.321838  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:31:52.327131  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:31:52.384824  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:31:52.420556  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:31:52.456174  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:31:52.492992  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:31:52.527605  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
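
Each "openssl x509 -noout ... -checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force the certificate to be regenerated. A rough pure-Go equivalent of that check, with an illustrative certificate path:

    // Approximates "openssl x509 -noout -in <cert> -checkend 86400":
    // exit non-zero if the certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // illustrative path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
    }
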
	I1115 09:31:52.563847  428896 kubeadm.go:401] StartCluster: {Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:52.564002  428896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:31:52.564061  428896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:31:52.598315  428896 cri.go:89] found id: "f33da4a57e7abac3ebb4c2bb796754d89a55d77cae917a4638e1dc7bb54b55b9"
	I1115 09:31:52.598342  428896 cri.go:89] found id: "6a62ffd50e27a5d8290e1041b339ee1c4011f892ee0b67e96eca3abce2936268"
	I1115 09:31:52.598346  428896 cri.go:89] found id: "98b9fc9a33f0b40586e635c881668594f59cdd960b26204a457a95a2020bd154"
	I1115 09:31:52.598352  428896 cri.go:89] found id: "bf31a867595678c370bce5d49663eec7f39f09c0ffba1367b034ab02c073ea71"
	I1115 09:31:52.598356  428896 cri.go:89] found id: "aa99d93bfb4888fbc03108f08590c503f95f20e1969eabb19d4a76ea1be94d6f"
	I1115 09:31:52.598361  428896 cri.go:89] found id: ""
	I1115 09:31:52.598433  428896 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 09:31:52.610898  428896 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:31:52Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:31:52.610984  428896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:31:52.619008  428896 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 09:31:52.619032  428896 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 09:31:52.619095  428896 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 09:31:52.626928  428896 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:31:52.627429  428896 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-577290" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:52.627702  428896 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-355485/kubeconfig needs updating (will repair): [kubeconfig missing "ha-577290" cluster setting kubeconfig missing "ha-577290" context setting]
	I1115 09:31:52.628120  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.628857  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:31:52.629429  428896 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 09:31:52.629443  428896 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 09:31:52.629457  428896 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 09:31:52.629464  428896 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 09:31:52.629469  428896 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 09:31:52.629474  428896 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 09:31:52.629935  428896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 09:31:52.638596  428896 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1115 09:31:52.638622  428896 kubeadm.go:602] duration metric: took 19.583961ms to restartPrimaryControlPlane
	I1115 09:31:52.638632  428896 kubeadm.go:403] duration metric: took 74.798878ms to StartCluster
	I1115 09:31:52.638659  428896 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.638739  428896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:52.639509  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.639770  428896 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:31:52.639796  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:31:52.639817  428896 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:31:52.640075  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:52.642696  428896 out.go:179] * Enabled addons: 
	I1115 09:31:52.643939  428896 addons.go:515] duration metric: took 4.127185ms for enable addons: enabled=[]
	I1115 09:31:52.643981  428896 start.go:247] waiting for cluster config update ...
	I1115 09:31:52.643992  428896 start.go:256] writing updated cluster config ...
	I1115 09:31:52.645418  428896 out.go:203] 
	I1115 09:31:52.646875  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:52.646991  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:52.648625  428896 out.go:179] * Starting "ha-577290-m02" control-plane node in "ha-577290" cluster
	I1115 09:31:52.649693  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:31:52.651012  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:31:52.652316  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:52.652334  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:31:52.652420  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:31:52.652479  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:31:52.652496  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:31:52.652639  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:52.677157  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:31:52.677183  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:31:52.677206  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:31:52.677237  428896 start.go:360] acquireMachinesLock for ha-577290-m02: {Name:mkf112ea76ada558a569f224e46caac6b694e64c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:31:52.677308  428896 start.go:364] duration metric: took 49.241µs to acquireMachinesLock for "ha-577290-m02"
	I1115 09:31:52.677330  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:31:52.677340  428896 fix.go:54] fixHost starting: m02
	I1115 09:31:52.677664  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:31:52.698576  428896 fix.go:112] recreateIfNeeded on ha-577290-m02: state=Stopped err=<nil>
	W1115 09:31:52.698609  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:31:52.700325  428896 out.go:252] * Restarting existing docker container for "ha-577290-m02" ...
	I1115 09:31:52.700427  428896 cli_runner.go:164] Run: docker start ha-577290-m02
	I1115 09:31:53.006147  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:31:53.028889  428896 kic.go:430] container "ha-577290-m02" state is running.
	I1115 09:31:53.029347  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:53.051018  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:53.051301  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:31:53.051366  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:53.074164  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:53.074499  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:53.074516  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:31:53.075211  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57138->127.0.0.1:33189: read: connection reset by peer
	I1115 09:31:56.207665  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m02
	
	I1115 09:31:56.207697  428896 ubuntu.go:182] provisioning hostname "ha-577290-m02"
	I1115 09:31:56.207780  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.232566  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:56.232897  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:56.232924  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m02 && echo "ha-577290-m02" | sudo tee /etc/hostname
	I1115 09:31:56.391849  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m02
	
	I1115 09:31:56.391935  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.414665  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:56.414967  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:56.414995  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:31:56.561504  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:31:56.561540  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:31:56.561563  428896 ubuntu.go:190] setting up certificates
	I1115 09:31:56.561579  428896 provision.go:84] configureAuth start
	I1115 09:31:56.561651  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:56.584955  428896 provision.go:143] copyHostCerts
	I1115 09:31:56.584995  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:56.585033  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:31:56.585051  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:56.585145  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:31:56.585258  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:56.585290  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:31:56.585298  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:56.585343  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:31:56.585423  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:56.585444  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:31:56.585450  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:56.585488  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:31:56.585575  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m02 san=[127.0.0.1 192.168.49.3 ha-577290-m02 localhost minikube]
	I1115 09:31:56.824747  428896 provision.go:177] copyRemoteCerts
	I1115 09:31:56.824826  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:31:56.824877  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.850475  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:56.951132  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:31:56.951210  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:31:56.977882  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:31:56.977954  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:31:56.997077  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:31:56.997147  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1115 09:31:57.016347  428896 provision.go:87] duration metric: took 454.750366ms to configureAuth
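
The configureAuth step above issues a fresh server certificate for the node, signed by the machine CA, carrying the SAN list from the log (127.0.0.1, 192.168.49.3, ha-577290-m02, localhost, minikube) so the endpoint can be reached by IP or by name. A self-contained Go sketch of issuing such a SAN certificate with crypto/x509; it generates a throwaway CA in memory instead of loading minikube's ca.pem/ca-key.pem, so all names and lifetimes are illustrative and error handling is elided for brevity:

    // Minimal sketch: create a CA, then sign a server certificate that carries
    // the same kind of SAN list minikube uses for a node. Illustrative only.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (minikube instead reuses certs/ca.pem and certs/ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "example-machine-ca"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with SANs like those in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-577290-m02", Organization: []string{"jenkins.ha-577290-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-577290-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Emit the server certificate PEM, roughly what gets shipped as server.pem.
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
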
	I1115 09:31:57.016381  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:31:57.016674  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:57.016833  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.052679  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:57.053005  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:57.053029  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:31:57.426092  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:31:57.426126  428896 machine.go:97] duration metric: took 4.374809168s to provisionDockerMachine
	I1115 09:31:57.426140  428896 start.go:293] postStartSetup for "ha-577290-m02" (driver="docker")
	I1115 09:31:57.426151  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:31:57.426220  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:31:57.426262  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.448519  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.545209  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:31:57.549384  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:31:57.549439  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:31:57.549452  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:31:57.549519  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:31:57.549596  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:31:57.549608  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:31:57.549687  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:31:57.558189  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:57.580235  428896 start.go:296] duration metric: took 154.07621ms for postStartSetup
	I1115 09:31:57.580333  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:31:57.580386  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.603433  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.701219  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:31:57.706336  428896 fix.go:56] duration metric: took 5.028989139s for fixHost
	I1115 09:31:57.706368  428896 start.go:83] releasing machines lock for "ha-577290-m02", held for 5.029048241s
	I1115 09:31:57.706470  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:57.727402  428896 out.go:179] * Found network options:
	I1115 09:31:57.728724  428896 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 09:31:57.729967  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:31:57.730005  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:31:57.730073  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:31:57.730128  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.730159  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:31:57.730230  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.748817  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.750362  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.903068  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:31:57.937805  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:31:57.937874  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:31:57.947024  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:31:57.947053  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:31:57.947136  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:31:57.947208  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:31:57.963666  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:31:57.976613  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:31:57.976675  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:31:57.991891  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:31:58.006003  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:31:58.153545  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:31:58.310509  428896 docker.go:234] disabling docker service ...
	I1115 09:31:58.310582  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:31:58.330775  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:31:58.348091  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:31:58.501312  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:31:58.629095  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:31:58.643176  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:31:58.658526  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:31:58.658590  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.668426  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:31:58.668483  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.679145  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.689023  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.698596  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:31:58.707252  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.717022  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.726715  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
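
Net effect of the sed edits above on the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands shown (an approximate fragment, not the whole file; other settings in the drop-in are left untouched):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
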
	I1115 09:31:58.735906  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:31:58.743685  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:31:58.751568  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:58.887672  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:29.141191  428896 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.253455227s)
	I1115 09:33:29.141240  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:29.141300  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:29.145595  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:29.145655  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:29.149342  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:29.174182  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:29.174254  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:29.204881  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:29.236181  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:29.237785  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:29.239150  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:29.257605  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:29.262168  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:29.273241  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:29.273540  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:29.273770  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:29.291615  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:29.291888  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.3
	I1115 09:33:29.291900  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:29.291916  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:29.292078  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:29.292119  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:29.292129  428896 certs.go:257] generating profile certs ...
	I1115 09:33:29.292200  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:33:29.292255  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.c5636f69
	I1115 09:33:29.292289  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:33:29.292300  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:29.292314  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:29.292326  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:29.292338  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:29.292352  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:33:29.292367  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:33:29.292387  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:33:29.292421  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:33:29.292481  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:29.292511  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:29.292522  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:29.292544  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:29.292568  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:29.292596  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:29.292645  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:29.292674  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.292685  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.292705  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.292756  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:33:29.311158  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:33:29.397746  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 09:33:29.402107  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 09:33:29.410807  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 09:33:29.414570  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 09:33:29.423209  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 09:33:29.426969  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 09:33:29.435369  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 09:33:29.439110  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 09:33:29.447938  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 09:33:29.451581  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 09:33:29.460040  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 09:33:29.463847  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 09:33:29.472802  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:29.491640  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:29.509789  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:29.527041  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:29.544384  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:33:29.562153  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:33:29.580258  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:29.598677  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:33:29.616730  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:29.635496  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:29.653811  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:29.671993  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 09:33:29.684693  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 09:33:29.697982  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 09:33:29.710750  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 09:33:29.723405  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 09:33:29.735786  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 09:33:29.748861  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 09:33:29.761801  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:29.768042  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:29.777574  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.781659  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.781740  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.817272  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:29.826567  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:29.836067  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.839987  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.840045  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.875123  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:29.884911  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:29.893650  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.897547  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.897614  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.933220  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:29.942015  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:29.946107  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:33:29.981924  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:33:30.017346  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:33:30.055728  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:33:30.091801  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:33:30.128083  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 09:33:30.165477  428896 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 09:33:30.165602  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:30.165633  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:33:30.165686  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:33:30.178477  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:33:30.178550  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 09:33:30.178626  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:30.187181  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:30.187255  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 09:33:30.195966  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:30.209403  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:30.222151  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:33:30.235250  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:30.239303  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:30.249724  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:30.355117  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:30.368971  428896 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:30.369229  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:30.370723  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:30.372269  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:30.476752  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:30.491166  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:30.491243  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:30.491612  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m02" to be "Ready" ...
	W1115 09:33:32.494974  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:34.495865  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:36.995901  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:39.495623  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	I1115 09:33:40.495728  428896 node_ready.go:49] node "ha-577290-m02" is "Ready"
	I1115 09:33:40.495762  428896 node_ready.go:38] duration metric: took 10.004119226s for node "ha-577290-m02" to be "Ready" ...
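
The node_ready wait above is a plain poll of the node's Ready condition through the Kubernetes API; here it flipped to Ready after about 10 seconds. A rough client-go sketch of the same loop, with the kubeconfig path and retry cadence as illustrative values:

    // Poll a node until its NodeReady condition is True, or give up after a deadline.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-577290-m02", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // the log above retries on a similar cadence
        }
        panic("timed out waiting for node to become Ready")
    }
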
	I1115 09:33:40.495779  428896 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:33:40.495830  428896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:33:40.508005  428896 api_server.go:72] duration metric: took 10.138962389s to wait for apiserver process to appear ...
	I1115 09:33:40.508034  428896 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:33:40.508058  428896 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:33:40.513137  428896 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:33:40.514147  428896 api_server.go:141] control plane version: v1.34.1
	I1115 09:33:40.514171  428896 api_server.go:131] duration metric: took 6.130383ms to wait for apiserver health ...
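
The healthz probe above is an HTTPS GET against the apiserver that counts as healthy when it returns 200 with body "ok". A small Go sketch of an equivalent probe, presenting the profile's client certificate and trusting the cluster CA; the paths and endpoint are taken from the log but should be treated as illustrative:

    // GET https://<apiserver>/healthz with client-certificate auth.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/21895-355485/.minikube" // illustrative base path

        clientCert, err := tls.LoadX509KeyPair(
            profile+"/profiles/ha-577290/client.crt",
            profile+"/profiles/ha-577290/client.key",
        )
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile(profile + "/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{
                    RootCAs:      pool,
                    Certificates: []tls.Certificate{clientCert},
                },
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
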
	I1115 09:33:40.514180  428896 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:33:40.521806  428896 system_pods.go:59] 26 kube-system pods found
	I1115 09:33:40.521847  428896 system_pods.go:61] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:40.521853  428896 system_pods.go:61] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:40.521857  428896 system_pods.go:61] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:40.521860  428896 system_pods.go:61] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:40.521865  428896 system_pods.go:61] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running
	I1115 09:33:40.521868  428896 system_pods.go:61] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:40.521871  428896 system_pods.go:61] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:40.521877  428896 system_pods.go:61] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:40.521888  428896 system_pods.go:61] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:40.521903  428896 system_pods.go:61] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:40.521907  428896 system_pods.go:61] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:40.521910  428896 system_pods.go:61] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running
	I1115 09:33:40.521913  428896 system_pods.go:61] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:40.521917  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:40.521922  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running
	I1115 09:33:40.521926  428896 system_pods.go:61] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:40.521929  428896 system_pods.go:61] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:40.521932  428896 system_pods.go:61] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:40.521935  428896 system_pods.go:61] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:40.521938  428896 system_pods.go:61] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:40.521941  428896 system_pods.go:61] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:40.521943  428896 system_pods.go:61] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running
	I1115 09:33:40.521947  428896 system_pods.go:61] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:40.521951  428896 system_pods.go:61] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:40.521953  428896 system_pods.go:61] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:40.521956  428896 system_pods.go:61] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:40.521962  428896 system_pods.go:74] duration metric: took 7.776979ms to wait for pod list to return data ...
	I1115 09:33:40.521973  428896 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:33:40.525281  428896 default_sa.go:45] found service account: "default"
	I1115 09:33:40.525304  428896 default_sa.go:55] duration metric: took 3.325885ms for default service account to be created ...
	I1115 09:33:40.525314  428896 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:33:40.532899  428896 system_pods.go:86] 26 kube-system pods found
	I1115 09:33:40.532942  428896 system_pods.go:89] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:40.532948  428896 system_pods.go:89] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:40.532952  428896 system_pods.go:89] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:40.532955  428896 system_pods.go:89] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:40.532958  428896 system_pods.go:89] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running
	I1115 09:33:40.532962  428896 system_pods.go:89] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:40.532965  428896 system_pods.go:89] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:40.532972  428896 system_pods.go:89] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:40.532980  428896 system_pods.go:89] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:40.532985  428896 system_pods.go:89] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:40.532988  428896 system_pods.go:89] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:40.532991  428896 system_pods.go:89] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running
	I1115 09:33:40.532997  428896 system_pods.go:89] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:40.533001  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:40.533007  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running
	I1115 09:33:40.533012  428896 system_pods.go:89] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:40.533018  428896 system_pods.go:89] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:40.533022  428896 system_pods.go:89] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:40.533027  428896 system_pods.go:89] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:40.533030  428896 system_pods.go:89] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:40.533033  428896 system_pods.go:89] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:40.533036  428896 system_pods.go:89] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running
	I1115 09:33:40.533039  428896 system_pods.go:89] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:40.533042  428896 system_pods.go:89] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:40.533047  428896 system_pods.go:89] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:40.533052  428896 system_pods.go:89] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:40.533059  428896 system_pods.go:126] duration metric: took 7.740388ms to wait for k8s-apps to be running ...
	I1115 09:33:40.533069  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:33:40.533115  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:33:40.546948  428896 system_svc.go:56] duration metric: took 13.851414ms WaitForService to wait for kubelet
	I1115 09:33:40.546981  428896 kubeadm.go:587] duration metric: took 10.17796689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:33:40.547004  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:33:40.550887  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550928  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550955  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550959  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550963  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550966  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550969  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550972  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550976  428896 node_conditions.go:105] duration metric: took 3.967331ms to run NodePressure ...
	I1115 09:33:40.550987  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:33:40.551013  428896 start.go:256] writing updated cluster config ...
	I1115 09:33:40.553290  428896 out.go:203] 
	I1115 09:33:40.555010  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:40.555154  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.556732  428896 out.go:179] * Starting "ha-577290-m03" control-plane node in "ha-577290" cluster
	I1115 09:33:40.558293  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:33:40.559533  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:33:40.560557  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:40.560573  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:33:40.560658  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:33:40.560677  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:33:40.560686  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:33:40.560802  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.581841  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:33:40.581862  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:33:40.581881  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:33:40.581911  428896 start.go:360] acquireMachinesLock for ha-577290-m03: {Name:mk956e932a0a61462f744b4bf6dccfcc158f1677 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:40.581975  428896 start.go:364] duration metric: took 45.083µs to acquireMachinesLock for "ha-577290-m03"
	I1115 09:33:40.582000  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:33:40.582009  428896 fix.go:54] fixHost starting: m03
	I1115 09:33:40.582213  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m03 --format={{.State.Status}}
	I1115 09:33:40.599708  428896 fix.go:112] recreateIfNeeded on ha-577290-m03: state=Stopped err=<nil>
	W1115 09:33:40.599741  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:33:40.601856  428896 out.go:252] * Restarting existing docker container for "ha-577290-m03" ...
	I1115 09:33:40.601929  428896 cli_runner.go:164] Run: docker start ha-577290-m03
	I1115 09:33:40.883039  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m03 --format={{.State.Status}}
	I1115 09:33:40.902259  428896 kic.go:430] container "ha-577290-m03" state is running.
	I1115 09:33:40.902730  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:40.923104  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.923365  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:40.923449  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:40.942829  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:40.943125  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:40.943143  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:40.943747  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45594->127.0.0.1:33194: read: connection reset by peer
	I1115 09:33:44.097198  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m03
	
	I1115 09:33:44.097227  428896 ubuntu.go:182] provisioning hostname "ha-577290-m03"
	I1115 09:33:44.097294  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.119447  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.119771  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.119790  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m03 && echo "ha-577290-m03" | sudo tee /etc/hostname
	I1115 09:33:44.272682  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m03
	
	I1115 09:33:44.272754  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.292482  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.292709  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.292725  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:44.427118  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:33:44.427153  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:33:44.427180  428896 ubuntu.go:190] setting up certificates
	I1115 09:33:44.427192  428896 provision.go:84] configureAuth start
	I1115 09:33:44.427251  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:44.449125  428896 provision.go:143] copyHostCerts
	I1115 09:33:44.449170  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:44.449207  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:33:44.449220  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:44.449315  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:33:44.449479  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:44.449519  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:33:44.449527  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:44.449580  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:33:44.449658  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:44.449684  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:33:44.449692  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:44.449729  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:33:44.449848  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m03 san=[127.0.0.1 192.168.49.4 ha-577290-m03 localhost minikube]
	I1115 09:33:44.532362  428896 provision.go:177] copyRemoteCerts
	I1115 09:33:44.532433  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:44.532473  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.550652  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:44.646162  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:33:44.646224  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:44.664161  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:33:44.664221  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:44.683656  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:33:44.683729  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:33:44.709533  428896 provision.go:87] duration metric: took 282.323517ms to configureAuth
	I1115 09:33:44.709568  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:44.709953  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:44.710431  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.730924  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.731134  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.731151  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:45.072969  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:45.073007  428896 machine.go:97] duration metric: took 4.149624743s to provisionDockerMachine
	I1115 09:33:45.073028  428896 start.go:293] postStartSetup for "ha-577290-m03" (driver="docker")
	I1115 09:33:45.073041  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:45.073117  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:45.073164  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.096852  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.197468  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:45.201750  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:45.201783  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:45.201797  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:33:45.201858  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:33:45.201951  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:33:45.201963  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:33:45.202075  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:33:45.210217  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:45.228458  428896 start.go:296] duration metric: took 155.41494ms for postStartSetup
	I1115 09:33:45.228526  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:45.228575  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.246932  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.337973  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:45.343136  428896 fix.go:56] duration metric: took 4.76111959s for fixHost
	I1115 09:33:45.343165  428896 start.go:83] releasing machines lock for "ha-577290-m03", held for 4.761175125s
	I1115 09:33:45.343237  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:45.363267  428896 out.go:179] * Found network options:
	I1115 09:33:45.364603  428896 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 09:33:45.365919  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365945  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365965  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365973  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:33:45.366049  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:45.366084  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.366197  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:45.366269  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.385469  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.385900  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.512144  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:45.539108  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:45.539183  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:45.548657  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:33:45.548681  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:33:45.548714  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:33:45.548758  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:45.565030  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:45.578828  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:45.578876  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:45.593659  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:45.606896  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:45.719282  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:45.833886  428896 docker.go:234] disabling docker service ...
	I1115 09:33:45.833972  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:45.849553  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:45.863178  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:46.002558  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:46.122751  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:46.135787  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:46.152335  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:46.152386  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.162211  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:33:46.162288  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.172907  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.182146  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.191787  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:46.201198  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.211208  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.221525  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.231770  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:46.240242  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:33:46.248568  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:46.362978  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
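The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A rough reconstruction of the resulting drop-in, inferred from the commands themselves rather than captured from the node, would be:

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected (inferred) output:
  # pause_image = "registry.k8s.io/pause:3.10.1"
  # cgroup_manager = "systemd"
  # conmon_cgroup = "pod"
  # default_sysctls = [
  #   "net.ipv4.ip_unprivileged_port_start=0",
  # ]
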
	I1115 09:33:46.529312  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:46.529407  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:46.534021  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:46.534084  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:46.537777  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:46.562624  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:46.562720  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:46.593038  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:46.624612  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:46.625782  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:46.626701  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 09:33:46.627918  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:46.647913  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:46.652309  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:46.663365  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:46.663617  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:46.663854  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:46.683967  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:46.684227  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.4
	I1115 09:33:46.684240  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:46.684254  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:46.684373  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:46.684442  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:46.684456  428896 certs.go:257] generating profile certs ...
	I1115 09:33:46.684531  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:33:46.684570  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.4e419922
	I1115 09:33:46.684607  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:33:46.684619  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:46.684635  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:46.684648  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:46.684658  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:46.684670  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:33:46.684682  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:33:46.684694  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:33:46.684703  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:33:46.684763  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:46.684793  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:46.684803  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:46.684825  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:46.684845  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:46.684867  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:46.684981  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:46.685022  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:46.685039  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:46.685052  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:46.685102  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:33:46.704190  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:33:46.792775  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 09:33:46.797318  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 09:33:46.806208  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 09:33:46.810016  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 09:33:46.819830  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 09:33:46.823486  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 09:33:46.831939  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 09:33:46.835879  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 09:33:46.844637  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 09:33:46.848667  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 09:33:46.857507  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 09:33:46.861254  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 09:33:46.870691  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:46.890068  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:46.908762  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:46.928604  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:46.946771  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:33:46.966008  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:33:46.985099  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:47.004286  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:33:47.023701  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:47.044426  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:47.063586  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:47.083517  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 09:33:47.097148  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 09:33:47.110614  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 09:33:47.125289  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 09:33:47.139218  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 09:33:47.152613  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 09:33:47.167316  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 09:33:47.186848  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:47.196607  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:47.208413  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.212323  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.212377  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.248988  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:47.257951  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:47.270018  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.276511  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.276612  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.315123  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:47.324272  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:47.333692  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.337850  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.337904  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.377447  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:47.386605  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:47.390885  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:33:47.428238  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:33:47.463635  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:33:47.500538  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:33:47.537928  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:33:47.573729  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 09:33:47.608297  428896 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1115 09:33:47.608438  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:47.608465  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:33:47.608505  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:33:47.621813  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:33:47.621905  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
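The generated kube-vip manifest above enables control-plane leader election on the lease plndr-cp-lock, with a 5s lease duration, 3s renew deadline, and 1s retry period, advertising the VIP 192.168.49.254. A spot-check one could run against the restarted cluster (kubectl assumed to be available; this command is not part of the captured run) would be:

  kubectl -n kube-system get lease plndr-cp-lock -o yaml
  # spec.holderIdentity should name whichever control-plane node currently holds the VIP
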
	I1115 09:33:47.621980  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:47.629857  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:47.629945  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 09:33:47.638232  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:47.652261  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:47.666706  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:33:47.681044  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:47.685137  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:47.696618  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:47.811257  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:47.825255  428896 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:47.825603  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:47.827569  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:47.828637  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:47.945833  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:47.960377  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:47.960507  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:47.960779  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m03" to be "Ready" ...
	I1115 09:33:47.964177  428896 node_ready.go:49] node "ha-577290-m03" is "Ready"
	I1115 09:33:47.964207  428896 node_ready.go:38] duration metric: took 3.409493ms for node "ha-577290-m03" to be "Ready" ...
	I1115 09:33:47.964220  428896 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:33:47.964274  428896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:33:47.976501  428896 api_server.go:72] duration metric: took 151.188832ms to wait for apiserver process to appear ...
	I1115 09:33:47.976526  428896 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:33:47.976549  428896 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:33:47.982576  428896 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:33:47.983614  428896 api_server.go:141] control plane version: v1.34.1
	I1115 09:33:47.983645  428896 api_server.go:131] duration metric: took 7.111217ms to wait for apiserver health ...
	I1115 09:33:47.983656  428896 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:33:47.990372  428896 system_pods.go:59] 26 kube-system pods found
	I1115 09:33:47.990422  428896 system_pods.go:61] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:47.990429  428896 system_pods.go:61] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:47.990435  428896 system_pods.go:61] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:47.990441  428896 system_pods.go:61] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:47.990450  428896 system_pods.go:61] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:33:47.990461  428896 system_pods.go:61] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:47.990470  428896 system_pods.go:61] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:47.990481  428896 system_pods.go:61] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:47.990487  428896 system_pods.go:61] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:47.990493  428896 system_pods.go:61] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:47.990498  428896 system_pods.go:61] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:47.990505  428896 system_pods.go:61] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:33:47.990511  428896 system_pods.go:61] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:47.990517  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:47.990526  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:33:47.990535  428896 system_pods.go:61] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:47.990541  428896 system_pods.go:61] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:47.990544  428896 system_pods.go:61] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:47.990549  428896 system_pods.go:61] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:47.990557  428896 system_pods.go:61] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:47.990562  428896 system_pods.go:61] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:47.990570  428896 system_pods.go:61] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:33:47.990578  428896 system_pods.go:61] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:47.990584  428896 system_pods.go:61] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:47.990592  428896 system_pods.go:61] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:47.990597  428896 system_pods.go:61] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:47.990604  428896 system_pods.go:74] duration metric: took 6.940099ms to wait for pod list to return data ...
	I1115 09:33:47.990618  428896 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:33:47.993458  428896 default_sa.go:45] found service account: "default"
	I1115 09:33:47.993482  428896 default_sa.go:55] duration metric: took 2.857379ms for default service account to be created ...
	I1115 09:33:47.993492  428896 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:33:47.999362  428896 system_pods.go:86] 26 kube-system pods found
	I1115 09:33:47.999436  428896 system_pods.go:89] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:47.999446  428896 system_pods.go:89] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:47.999452  428896 system_pods.go:89] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:47.999467  428896 system_pods.go:89] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:47.999481  428896 system_pods.go:89] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:33:47.999488  428896 system_pods.go:89] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:47.999498  428896 system_pods.go:89] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:47.999510  428896 system_pods.go:89] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:47.999520  428896 system_pods.go:89] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:47.999527  428896 system_pods.go:89] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:47.999536  428896 system_pods.go:89] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:47.999544  428896 system_pods.go:89] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:33:47.999553  428896 system_pods.go:89] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:47.999561  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:47.999573  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:33:47.999586  428896 system_pods.go:89] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:47.999594  428896 system_pods.go:89] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:47.999602  428896 system_pods.go:89] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:47.999608  428896 system_pods.go:89] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:47.999615  428896 system_pods.go:89] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:47.999623  428896 system_pods.go:89] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:47.999633  428896 system_pods.go:89] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:33:47.999642  428896 system_pods.go:89] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:47.999654  428896 system_pods.go:89] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:47.999660  428896 system_pods.go:89] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:47.999665  428896 system_pods.go:89] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:47.999676  428896 system_pods.go:126] duration metric: took 6.175615ms to wait for k8s-apps to be running ...
	I1115 09:33:47.999689  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:33:47.999747  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:33:48.013321  428896 system_svc.go:56] duration metric: took 13.620486ms WaitForService to wait for kubelet
	I1115 09:33:48.013354  428896 kubeadm.go:587] duration metric: took 188.047542ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:33:48.013372  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:33:48.017378  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017414  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017429  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017435  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017440  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017446  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017451  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017456  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017465  428896 node_conditions.go:105] duration metric: took 4.087504ms to run NodePressure ...
	I1115 09:33:48.017479  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:33:48.017513  428896 start.go:256] writing updated cluster config ...
	I1115 09:33:48.019414  428896 out.go:203] 
	I1115 09:33:48.021095  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:48.021213  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.022801  428896 out.go:179] * Starting "ha-577290-m04" worker node in "ha-577290" cluster
	I1115 09:33:48.023813  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:33:48.025033  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:33:48.026034  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:48.026051  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:33:48.026126  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:33:48.026161  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:33:48.026176  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:33:48.026313  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.048674  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:33:48.048695  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:33:48.048712  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:33:48.048737  428896 start.go:360] acquireMachinesLock for ha-577290-m04: {Name:mk727375190f43e7b9d23177818f3e0fe7e90632 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:48.048792  428896 start.go:364] duration metric: took 39.722µs to acquireMachinesLock for "ha-577290-m04"
	I1115 09:33:48.048810  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:33:48.048817  428896 fix.go:54] fixHost starting: m04
	I1115 09:33:48.049018  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:33:48.066458  428896 fix.go:112] recreateIfNeeded on ha-577290-m04: state=Stopped err=<nil>
	W1115 09:33:48.066487  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:33:48.068426  428896 out.go:252] * Restarting existing docker container for "ha-577290-m04" ...
	I1115 09:33:48.068502  428896 cli_runner.go:164] Run: docker start ha-577290-m04
	I1115 09:33:48.374025  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:33:48.394334  428896 kic.go:430] container "ha-577290-m04" state is running.
	I1115 09:33:48.394855  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:48.414950  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.415224  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:48.415304  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:48.436207  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:48.436464  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:48.436478  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:48.437107  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46302->127.0.0.1:33199: read: connection reset by peer
	I1115 09:33:51.570007  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m04
	
	I1115 09:33:51.570038  428896 ubuntu.go:182] provisioning hostname "ha-577290-m04"
	I1115 09:33:51.570109  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:51.589648  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:51.589938  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:51.589956  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m04 && echo "ha-577290-m04" | sudo tee /etc/hostname
	I1115 09:33:51.730555  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m04
	
	I1115 09:33:51.730652  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:51.749427  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:51.749732  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:51.749758  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:51.881659  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:33:51.881699  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:33:51.881721  428896 ubuntu.go:190] setting up certificates
	I1115 09:33:51.881735  428896 provision.go:84] configureAuth start
	I1115 09:33:51.881795  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:51.905477  428896 provision.go:143] copyHostCerts
	I1115 09:33:51.905520  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:51.905560  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:33:51.905565  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:51.905636  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:33:51.905713  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:51.905742  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:33:51.905749  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:51.905780  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:33:51.905850  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:51.905881  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:33:51.905887  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:51.905918  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:33:51.905994  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m04 san=[127.0.0.1 192.168.49.5 ha-577290-m04 localhost minikube]
	I1115 09:33:52.709519  428896 provision.go:177] copyRemoteCerts
	I1115 09:33:52.709588  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:52.709639  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:52.729670  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:52.827014  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:33:52.827074  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:52.845307  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:33:52.845373  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:33:52.864228  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:33:52.864311  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:52.882736  428896 provision.go:87] duration metric: took 1.000983567s to configureAuth
	I1115 09:33:52.882768  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:52.882985  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:52.883086  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:52.901749  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:52.901964  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:52.901980  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:53.158344  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:53.158378  428896 machine.go:97] duration metric: took 4.74313086s to provisionDockerMachine
	I1115 09:33:53.158427  428896 start.go:293] postStartSetup for "ha-577290-m04" (driver="docker")
	I1115 09:33:53.158462  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:53.158540  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:53.158593  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.180692  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.278677  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:53.282826  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:53.282861  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:53.282950  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:33:53.283052  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:33:53.283142  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:33:53.283157  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:33:53.283256  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:33:53.292307  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:53.311030  428896 start.go:296] duration metric: took 152.582175ms for postStartSetup
	I1115 09:33:53.311119  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:53.311155  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.330486  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.423358  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:53.428267  428896 fix.go:56] duration metric: took 5.379444169s for fixHost
	I1115 09:33:53.428291  428896 start.go:83] releasing machines lock for "ha-577290-m04", held for 5.379488718s
	I1115 09:33:53.428356  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:53.450722  428896 out.go:179] * Found network options:
	I1115 09:33:53.452273  428896 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1115 09:33:53.453579  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453607  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453616  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453643  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453660  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453674  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:33:53.453759  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:53.453807  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:53.453873  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.453813  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.472760  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.473149  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.627249  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:53.632573  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:53.632637  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:53.642178  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:33:53.642206  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:33:53.642240  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:33:53.642300  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:53.657825  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:53.671742  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:53.671815  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:53.687976  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:53.701149  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:53.785060  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:53.872517  428896 docker.go:234] disabling docker service ...
	I1115 09:33:53.872587  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:53.888847  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:53.902669  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:53.985655  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:54.076443  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:54.089637  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:54.104342  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:54.104514  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.113954  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:33:54.114031  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.123713  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.133355  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.144683  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:54.153702  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.163284  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.172255  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.181589  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:54.189668  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:33:54.197336  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:54.288186  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:54.403383  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:54.403492  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:54.407772  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:54.407839  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:54.411798  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:54.438501  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:54.438607  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:54.468561  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:54.499645  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:54.501099  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:54.502317  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 09:33:54.503727  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1115 09:33:54.505140  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:54.524109  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:54.528569  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:54.539044  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:54.539261  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:54.539487  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:54.557777  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:54.558052  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.5
	I1115 09:33:54.558069  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:54.558091  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:54.558225  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:54.558262  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:54.558276  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:54.558292  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:54.558306  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:54.558319  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:54.558371  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:54.558419  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:54.558431  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:54.558454  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:54.558475  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:54.558502  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:54.558543  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:54.558573  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.558586  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.558599  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.558619  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:54.581222  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:54.600809  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:54.619688  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:54.637947  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:54.657828  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:54.680584  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:54.710166  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:54.717263  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:54.727158  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.731833  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.731883  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.768964  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:54.777707  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:54.787101  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.791155  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.791218  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.826198  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:54.835154  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:54.845054  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.849628  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.849691  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.888273  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:54.897198  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:54.901079  428896 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:33:54.901140  428896 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1115 09:33:54.901265  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:54.901334  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:54.910356  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:54.910503  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1115 09:33:54.919713  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:54.934154  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:54.948279  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:54.952666  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:54.964534  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:55.052727  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:55.067727  428896 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1115 09:33:55.068040  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:55.070111  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:55.071556  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:55.163626  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:55.178038  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:55.178107  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:55.178364  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m04" to be "Ready" ...
	W1115 09:33:57.182074  428896 node_ready.go:57] node "ha-577290-m04" has "Ready":"Unknown" status (will retry)
	W1115 09:33:59.682695  428896 node_ready.go:57] node "ha-577290-m04" has "Ready":"Unknown" status (will retry)
	I1115 09:34:01.682637  428896 node_ready.go:49] node "ha-577290-m04" is "Ready"
	I1115 09:34:01.682668  428896 node_ready.go:38] duration metric: took 6.504287602s for node "ha-577290-m04" to be "Ready" ...
	I1115 09:34:01.682681  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:34:01.682732  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:34:01.696758  428896 system_svc.go:56] duration metric: took 14.066869ms WaitForService to wait for kubelet
	I1115 09:34:01.696792  428896 kubeadm.go:587] duration metric: took 6.629025488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:34:01.696815  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:34:01.700561  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700588  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700599  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700603  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700606  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700609  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700612  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700615  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700619  428896 node_conditions.go:105] duration metric: took 3.798933ms to run NodePressure ...
	I1115 09:34:01.700630  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:34:01.700652  428896 start.go:256] writing updated cluster config ...
	I1115 09:34:01.700940  428896 ssh_runner.go:195] Run: rm -f paused
	I1115 09:34:01.705190  428896 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:34:01.705690  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:34:01.714720  428896 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hcps6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.720476  428896 pod_ready.go:94] pod "coredns-66bc5c9577-hcps6" is "Ready"
	I1115 09:34:01.720506  428896 pod_ready.go:86] duration metric: took 5.756993ms for pod "coredns-66bc5c9577-hcps6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.720518  428896 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xqpdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.725758  428896 pod_ready.go:94] pod "coredns-66bc5c9577-xqpdq" is "Ready"
	I1115 09:34:01.725790  428896 pod_ready.go:86] duration metric: took 5.264346ms for pod "coredns-66bc5c9577-xqpdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.728618  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.733682  428896 pod_ready.go:94] pod "etcd-ha-577290" is "Ready"
	I1115 09:34:01.733713  428896 pod_ready.go:86] duration metric: took 5.068711ms for pod "etcd-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.733724  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.738674  428896 pod_ready.go:94] pod "etcd-ha-577290-m02" is "Ready"
	I1115 09:34:01.738702  428896 pod_ready.go:86] duration metric: took 4.96923ms for pod "etcd-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.738711  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.907175  428896 request.go:683] "Waited before sending request" delay="168.345879ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-577290-m03"
	I1115 09:34:02.106204  428896 request.go:683] "Waited before sending request" delay="195.32057ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:02.109590  428896 pod_ready.go:94] pod "etcd-ha-577290-m03" is "Ready"
	I1115 09:34:02.109621  428896 pod_ready.go:86] duration metric: took 370.905099ms for pod "etcd-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.307120  428896 request.go:683] "Waited before sending request" delay="197.367777ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 09:34:02.311497  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.506963  428896 request.go:683] "Waited before sending request" delay="195.356346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290"
	I1115 09:34:02.706771  428896 request.go:683] "Waited before sending request" delay="196.448308ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290"
	I1115 09:34:02.710109  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290" is "Ready"
	I1115 09:34:02.710139  428896 pod_ready.go:86] duration metric: took 398.612345ms for pod "kube-apiserver-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.710148  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.906594  428896 request.go:683] "Waited before sending request" delay="196.34557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290-m02"
	I1115 09:34:03.106336  428896 request.go:683] "Waited before sending request" delay="196.305201ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:03.109900  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290-m02" is "Ready"
	I1115 09:34:03.109935  428896 pod_ready.go:86] duration metric: took 399.77994ms for pod "kube-apiserver-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.109947  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.306248  428896 request.go:683] "Waited before sending request" delay="196.205945ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290-m03"
	I1115 09:34:03.507032  428896 request.go:683] "Waited before sending request" delay="197.392595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:03.509957  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290-m03" is "Ready"
	I1115 09:34:03.509989  428896 pod_ready.go:86] duration metric: took 400.035581ms for pod "kube-apiserver-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.706553  428896 request.go:683] "Waited before sending request" delay="196.41245ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 09:34:03.710543  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.907045  428896 request.go:683] "Waited before sending request" delay="196.330959ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290"
	I1115 09:34:04.106816  428896 request.go:683] "Waited before sending request" delay="196.427767ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290"
	I1115 09:34:04.110328  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290" is "Ready"
	I1115 09:34:04.110357  428896 pod_ready.go:86] duration metric: took 399.786401ms for pod "kube-controller-manager-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.110368  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.306851  428896 request.go:683] "Waited before sending request" delay="196.351238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290-m02"
	I1115 09:34:04.506506  428896 request.go:683] "Waited before sending request" delay="196.393036ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:04.509995  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290-m02" is "Ready"
	I1115 09:34:04.510025  428896 pod_ready.go:86] duration metric: took 399.650133ms for pod "kube-controller-manager-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.510034  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.706646  428896 request.go:683] "Waited before sending request" delay="196.418062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290-m03"
	I1115 09:34:04.906837  428896 request.go:683] "Waited before sending request" delay="196.369246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:04.909799  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290-m03" is "Ready"
	I1115 09:34:04.909834  428896 pod_ready.go:86] duration metric: took 399.79293ms for pod "kube-controller-manager-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:05.106269  428896 request.go:683] "Waited before sending request" delay="196.284181ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1115 09:34:05.110078  428896 pod_ready.go:83] waiting for pod "kube-proxy-4j6b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:05.306484  428896 request.go:683] "Waited before sending request" delay="196.226116ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4j6b5"
	I1115 09:34:05.506233  428896 request.go:683] "Waited before sending request" delay="196.286404ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:05.706640  428896 request.go:683] "Waited before sending request" delay="96.270262ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4j6b5"
	I1115 09:34:05.906700  428896 request.go:683] "Waited before sending request" delay="196.368708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:06.306548  428896 request.go:683] "Waited before sending request" delay="192.368837ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:06.707117  428896 request.go:683] "Waited before sending request" delay="93.270622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	W1115 09:34:07.116563  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:09.617314  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:12.116956  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:14.616273  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:17.116371  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:19.116501  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:21.116689  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:23.116818  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:25.617234  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:28.117036  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:30.617226  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:33.116469  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:35.616777  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:37.617262  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:40.117449  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:42.117831  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:44.616287  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:46.618306  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:49.116723  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:51.616229  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:53.617820  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:56.116943  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:58.616333  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:00.616873  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:02.617011  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:05.117447  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:07.616106  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:09.616804  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:12.124337  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:14.616125  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:16.617016  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:19.118269  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:21.616189  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:23.617124  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:26.116836  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:28.117058  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:30.117374  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:32.618970  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:35.116227  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:37.117008  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:39.616965  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:42.116851  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:44.618213  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:47.116222  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:49.616933  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:52.116850  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:54.616756  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:57.116793  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:59.616644  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:02.116080  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:04.116718  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:06.618437  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:09.116036  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:11.116546  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:13.616999  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:16.117083  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:18.616365  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:20.616664  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:22.617250  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:25.116824  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:27.116961  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:29.616385  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:32.116865  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:34.616343  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:36.616981  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:39.117055  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:41.616357  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:43.616462  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:45.616976  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:48.117111  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:50.616999  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:53.115913  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:55.116281  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:57.616365  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:59.616778  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:02.116803  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:04.615843  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:06.616292  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:08.617646  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:11.116723  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:13.116830  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:15.616517  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:18.116690  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:20.616314  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:23.116309  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:25.116508  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:27.117035  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:29.617437  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:32.116146  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:34.116964  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:36.616844  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:39.115867  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:41.116493  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:43.616383  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:45.617047  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:48.116809  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:50.617022  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:53.116939  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:55.615892  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:57.616280  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:38:00.116339  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	I1115 09:38:01.705542  428896 pod_ready.go:86] duration metric: took 3m56.595425039s for pod "kube-proxy-4j6b5" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 09:38:01.705579  428896 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1115 09:38:01.705595  428896 pod_ready.go:40] duration metric: took 4m0.000371267s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:38:01.707088  428896 out.go:203] 
	W1115 09:38:01.708237  428896 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1115 09:38:01.709353  428896 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-577290 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 node list --alsologtostderr -v 5
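The stderr block above shows minikube's pod_ready wait loop polling roughly every 2.5s for pods carrying the k8s-app=kube-proxy label until the 4m0s WaitExtra deadline expires. For reference only, the following is a minimal, hypothetical client-go sketch of that kind of readiness poll; it is not minikube's actual pod_ready.go code, and it assumes a kubeconfig at the default location plus the same kube-system namespace and label selector seen in the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes ~/.kube/config points at the cluster under test (hypothetical setup).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll for up to 4 minutes, mirroring the deadline reported in the log.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
			if err == nil {
				allReady := len(pods.Items) > 0
				for i := range pods.Items {
					if !isPodReady(&pods.Items[i]) {
						allReady = false
						fmt.Printf("pod %q is not Ready\n", pods.Items[i].Name)
					}
				}
				if allReady {
					fmt.Println("all kube-proxy pods are Ready")
					return
				}
			}
			time.Sleep(2500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-proxy pods to become Ready")
	}

When triaging this failure by hand, the equivalent check is simply listing the kube-proxy pods in kube-system on the restarted cluster and looking at their Ready conditions.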
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-577290
helpers_test.go:243: (dbg) docker inspect ha-577290:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1",
	        "Created": "2025-11-15T09:26:44.261814815Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 429099,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:31:45.502200821Z",
	            "FinishedAt": "2025-11-15T09:31:44.848068466Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/hosts",
	        "LogPath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1-json.log",
	        "Name": "/ha-577290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-577290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-577290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1",
	                "LowerDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-577290",
	                "Source": "/var/lib/docker/volumes/ha-577290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-577290",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-577290",
	                "name.minikube.sigs.k8s.io": "ha-577290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "acc491fab32d2cd65172330feb24af61e80c585358abfd8158cdefa06e7c42ee",
	            "SandboxKey": "/var/run/docker/netns/acc491fab32d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "Networks": {
	                "ha-577290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a8fb985664d5790039e66f3c687f2a82ee3c69ad2fee979f63d3b79d803a991",
	                    "EndpointID": "3837089187f6cc16fd8cb01329916fb6aadb5ac9bc7b469563f35a001ef3675a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0e:36:12:84:b4:30",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-577290",
	                        "55fd204192d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
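The NetworkSettings.Ports map in the inspect output above (22/tcp published on 127.0.0.1:33184, 8443/tcp on 33187, and so on) is what the later cli_runner lines query with a Go template to recover the mapped SSH host port. As a minimal sketch, assuming the ha-577290 container from this output is still present on the local Docker daemon, the same template can be exercised from Go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the cli_runner log lines use to pull the host port
		// mapped to the guest's SSH port (22/tcp).
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-577290").Output()
		if err != nil {
			panic(err)
		}
		// Per the inspect output above, this should print 33184.
		fmt.Println("SSH host port:", strings.TrimSpace(string(out)))
	}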
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-577290 -n ha-577290
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 logs -n 25: (1.116531458s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-577290 cp ha-577290-m03:/home/docker/cp-test.txt ha-577290-m02:/home/docker/cp-test_ha-577290-m03_ha-577290-m02.txt              │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m02 sudo cat /home/docker/cp-test_ha-577290-m03_ha-577290-m02.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m03:/home/docker/cp-test.txt ha-577290-m04:/home/docker/cp-test_ha-577290-m03_ha-577290-m04.txt              │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test_ha-577290-m03_ha-577290-m04.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp testdata/cp-test.txt ha-577290-m04:/home/docker/cp-test.txt                                                            │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile512031102/001/cp-test_ha-577290-m04.txt │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290:/home/docker/cp-test_ha-577290-m04_ha-577290.txt                      │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290 sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290.txt                                                │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290-m02:/home/docker/cp-test_ha-577290-m04_ha-577290-m02.txt              │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m02 sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290-m02.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290-m03:/home/docker/cp-test_ha-577290-m04_ha-577290-m03.txt              │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m03 sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290-m03.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ node    │ ha-577290 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ node    │ ha-577290 node start m02 --alsologtostderr -v 5                                                                                     │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ node    │ ha-577290 node list --alsologtostderr -v 5                                                                                          │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │                     │
	│ stop    │ ha-577290 stop --alsologtostderr -v 5                                                                                               │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:31 UTC │
	│ start   │ ha-577290 start --wait true --alsologtostderr -v 5                                                                                  │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:31 UTC │                     │
	│ node    │ ha-577290 node list --alsologtostderr -v 5                                                                                          │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:31:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:31:45.266575  428896 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:31:45.266886  428896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:31:45.266898  428896 out.go:374] Setting ErrFile to fd 2...
	I1115 09:31:45.266902  428896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:31:45.267163  428896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:31:45.267737  428896 out.go:368] Setting JSON to false
	I1115 09:31:45.268710  428896 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4446,"bootTime":1763194659,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:31:45.268819  428896 start.go:143] virtualization: kvm guest
	I1115 09:31:45.270819  428896 out.go:179] * [ha-577290] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:31:45.272427  428896 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:31:45.272431  428896 notify.go:221] Checking for updates...
	I1115 09:31:45.274773  428896 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:31:45.276134  428896 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:45.277406  428896 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:31:45.278544  428896 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:31:45.280004  428896 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:31:45.281655  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:45.281802  428896 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:31:45.305468  428896 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:31:45.305577  428896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:31:45.363884  428896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-15 09:31:45.353980004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:31:45.363994  428896 docker.go:319] overlay module found
	I1115 09:31:45.366036  428896 out.go:179] * Using the docker driver based on existing profile
	I1115 09:31:45.367327  428896 start.go:309] selected driver: docker
	I1115 09:31:45.367347  428896 start.go:930] validating driver "docker" against &{Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:45.367524  428896 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:31:45.367608  428896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:31:45.426878  428896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-15 09:31:45.417064116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:31:45.427845  428896 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:31:45.427892  428896 cni.go:84] Creating CNI manager for ""
	I1115 09:31:45.427961  428896 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1115 09:31:45.428020  428896 start.go:353] cluster config:
	{Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:45.429910  428896 out.go:179] * Starting "ha-577290" primary control-plane node in "ha-577290" cluster
	I1115 09:31:45.431277  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:31:45.432779  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:31:45.434027  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:45.434081  428896 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:31:45.434108  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:31:45.434157  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:31:45.434217  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:31:45.434231  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:31:45.434406  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:45.454978  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:31:45.455002  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:31:45.455026  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:31:45.455057  428896 start.go:360] acquireMachinesLock for ha-577290: {Name:mk6172d84dd1d32a54848cf1d049455806d86fc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:31:45.455126  428896 start.go:364] duration metric: took 46.262µs to acquireMachinesLock for "ha-577290"
	I1115 09:31:45.455149  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:31:45.455159  428896 fix.go:54] fixHost starting: 
	I1115 09:31:45.455379  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:31:45.473405  428896 fix.go:112] recreateIfNeeded on ha-577290: state=Stopped err=<nil>
	W1115 09:31:45.473441  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:31:45.475321  428896 out.go:252] * Restarting existing docker container for "ha-577290" ...
	I1115 09:31:45.475413  428896 cli_runner.go:164] Run: docker start ha-577290
	I1115 09:31:45.734297  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:31:45.753588  428896 kic.go:430] container "ha-577290" state is running.
	I1115 09:31:45.753944  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:45.772816  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:45.773098  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:31:45.773176  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:45.793693  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:45.793956  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:45.793974  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:31:45.794782  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46998->127.0.0.1:33184: read: connection reset by peer
	I1115 09:31:48.924615  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290
	
	I1115 09:31:48.924669  428896 ubuntu.go:182] provisioning hostname "ha-577290"
	I1115 09:31:48.924735  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:48.943068  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:48.943339  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:48.943354  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290 && echo "ha-577290" | sudo tee /etc/hostname
	I1115 09:31:49.082618  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290
	
	I1115 09:31:49.082703  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.100574  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:49.100818  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:49.100842  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:31:49.230624  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:31:49.230659  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:31:49.230707  428896 ubuntu.go:190] setting up certificates
	I1115 09:31:49.230722  428896 provision.go:84] configureAuth start
	I1115 09:31:49.230803  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:49.249474  428896 provision.go:143] copyHostCerts
	I1115 09:31:49.249521  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:49.249578  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:31:49.249598  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:49.249677  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:31:49.249798  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:49.249825  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:31:49.249835  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:49.249880  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:31:49.250060  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:49.250160  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:31:49.250181  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:49.250240  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:31:49.250337  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290 san=[127.0.0.1 192.168.49.2 ha-577290 localhost minikube]
	I1115 09:31:49.553270  428896 provision.go:177] copyRemoteCerts
	I1115 09:31:49.553355  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:31:49.553408  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.571907  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:49.667671  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:31:49.667749  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:31:49.687153  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:31:49.687230  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1115 09:31:49.705517  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:31:49.705588  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:31:49.723853  428896 provision.go:87] duration metric: took 493.11187ms to configureAuth
	I1115 09:31:49.723888  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:31:49.724092  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:49.724201  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.742818  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:49.743043  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:49.743057  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:31:50.033292  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:31:50.033324  428896 machine.go:97] duration metric: took 4.26020713s to provisionDockerMachine
	I1115 09:31:50.033341  428896 start.go:293] postStartSetup for "ha-577290" (driver="docker")
	I1115 09:31:50.033354  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:31:50.033471  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:31:50.033538  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.054075  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.149459  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:31:50.153204  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:31:50.153244  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:31:50.153258  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:31:50.153313  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:31:50.153436  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:31:50.153459  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:31:50.153592  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:31:50.161899  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:50.180230  428896 start.go:296] duration metric: took 146.870031ms for postStartSetup
	I1115 09:31:50.180319  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:31:50.180381  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.199337  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.290830  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:31:50.295656  428896 fix.go:56] duration metric: took 4.840490237s for fixHost
	I1115 09:31:50.295688  428896 start.go:83] releasing machines lock for "ha-577290", held for 4.840547311s
	I1115 09:31:50.295776  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:50.314561  428896 ssh_runner.go:195] Run: cat /version.json
	I1115 09:31:50.314634  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.314640  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:31:50.314706  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.333494  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.333615  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.480680  428896 ssh_runner.go:195] Run: systemctl --version
	I1115 09:31:50.487312  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:31:50.522567  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:31:50.527574  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:31:50.527668  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:31:50.536442  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:31:50.536471  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:31:50.536510  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:31:50.536562  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:31:50.552643  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:31:50.565682  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:31:50.565732  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:31:50.579797  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:31:50.592607  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:31:50.674494  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:31:50.753757  428896 docker.go:234] disabling docker service ...
	I1115 09:31:50.753838  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:31:50.768880  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:31:50.781446  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:31:50.862035  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:31:50.941863  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:31:50.955003  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:31:50.969531  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:31:50.969630  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.978678  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:31:50.978767  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.987922  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.997554  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.006963  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:31:51.015699  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.024835  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.033468  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.042627  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:31:51.050076  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:31:51.057319  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:51.138979  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
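The block above first points crictl at the cri-o socket through /etc/crictl.yaml and then rewrites the /etc/crio/crio.conf.d/02-crio.conf drop-in: the kubeadm pause image, the systemd cgroup manager, conmon_cgroup = "pod", and a default_sysctls entry that opens unprivileged ports, followed by a daemon-reload and a cri-o restart. A condensed, commented restatement of those steps in shell (same paths and values as the log lines; the exact layout of 02-crio.conf is assumed):

    # point crictl at cri-o's CRI socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # keys the sed calls above leave in /etc/crio/crio.conf.d/02-crio.conf:
    #   pause_image     = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager  = "systemd"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo systemctl daemon-reload && sudo systemctl restart crio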
	I1115 09:31:51.250267  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:31:51.250325  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:31:51.254431  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:31:51.254482  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:31:51.258072  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:31:51.283265  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:31:51.283331  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:31:51.311792  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:31:51.341627  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:31:51.342956  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:31:51.361359  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:31:51.365628  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
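The pair of commands above keeps the host.minikube.internal entry in /etc/hosts current: the grep checks whether the entry already points at the gateway, and the rewrite drops any stale line and re-adds it with 192.168.49.1. The temp-file-plus-sudo-cp step is there because a plain shell redirect would run with the SSH user's privileges rather than root's. A commented restatement (the printf with an explicit \t stands in for the literal tab in the logged command):

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts     # drop any stale entry
      printf '192.168.49.1\thost.minikube.internal\n'     # re-add it with the current gateway IP
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                          # copy into place as root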
	I1115 09:31:51.376129  428896 kubeadm.go:884] updating cluster {Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:31:51.376278  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:51.376328  428896 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:31:51.411138  428896 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:31:51.411158  428896 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:31:51.411201  428896 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:31:51.438061  428896 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:31:51.438086  428896 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:31:51.438095  428896 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:31:51.438206  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
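The [Service] override rendered above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); the empty ExecStart= line clears the unit's default command before the minikube-specific kubelet invocation is set. Two generic systemd commands for checking what actually got merged on the node (not part of the test run):

    systemctl cat kubelet                 # kubelet.service plus every drop-in, including 10-kubeadm.conf
    systemctl show -p ExecStart kubelet   # the effective, last-wins ExecStart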
	I1115 09:31:51.438283  428896 ssh_runner.go:195] Run: crio config
	I1115 09:31:51.486595  428896 cni.go:84] Creating CNI manager for ""
	I1115 09:31:51.486621  428896 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1115 09:31:51.486644  428896 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:31:51.486670  428896 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-577290 NodeName:ha-577290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:31:51.486829  428896 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-577290"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
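The rendered kubeadm file above bundles four documents: InitConfiguration (node registration and the local API endpoint 192.168.49.2:8443), ClusterConfiguration (controlPlaneEndpoint control-plane.minikube.internal:8443, cert and etcd directories, pod and service subnets), KubeletConfiguration (systemd cgroup driver, cri-o socket, disk eviction disabled) and KubeProxyConfiguration (conntrack values of 0 and 0s so kube-proxy leaves the kernel defaults untouched). It is copied a few lines below to /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can lint such a file by hand; shown as an optional manual check, not something the test runs:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new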
	
	I1115 09:31:51.486855  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:31:51.486908  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:31:51.499329  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:31:51.499466  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
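This manifest is written a few lines below to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod on the control-plane node. It claims the HA virtual IP 192.168.49.254 (the APIServerHAVIP from the cluster config) on eth0, advertises it via ARP, and uses Lease-based leader election (plndr-cp-lock) so only one control-plane node holds the VIP at a time. Because the lsmod probe above found no ip_vs modules, IPVS load-balancing was skipped and kube-vip only manages the VIP itself. Two quick checks on the node (generic commands, not part of the test run):

    sudo crictl ps --name kube-vip                 # the static pod's container should be running
    ip addr show dev eth0 | grep 192.168.49.254    # the VIP is bound only on the current leader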
	I1115 09:31:51.499536  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:31:51.507665  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:31:51.507743  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 09:31:51.516035  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 09:31:51.528543  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:31:51.540903  428896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1115 09:31:51.553425  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:31:51.566186  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:31:51.569903  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:31:51.579760  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:51.657522  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:31:51.682929  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.2
	I1115 09:31:51.682962  428896 certs.go:195] generating shared ca certs ...
	I1115 09:31:51.682984  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.683252  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:31:51.683303  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:31:51.683316  428896 certs.go:257] generating profile certs ...
	I1115 09:31:51.683414  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:31:51.683438  428896 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd
	I1115 09:31:51.683459  428896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1115 09:31:51.902645  428896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd ...
	I1115 09:31:51.902677  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd: {Name:mk31504058a71e0f7602a819b395f2dc874b4f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.902882  428896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd ...
	I1115 09:31:51.902903  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd: {Name:mk62d65624b9927bec45ce4edc59d90214e67d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.903010  428896 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt
	I1115 09:31:51.903152  428896 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key
	I1115 09:31:51.903287  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:31:51.903304  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:31:51.903316  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:31:51.903328  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:31:51.903338  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:31:51.903350  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:31:51.903360  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:31:51.903371  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:31:51.903381  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:31:51.903453  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:31:51.903493  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:31:51.903503  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:31:51.903523  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:31:51.903545  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:31:51.903572  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:31:51.903616  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:51.903642  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:51.903656  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:31:51.903668  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:31:51.904202  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:31:51.923549  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:31:51.941100  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:31:51.959534  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:31:51.977478  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:31:51.995833  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:31:52.013950  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:31:52.032035  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:31:52.049984  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:31:52.068640  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:31:52.087500  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:31:52.105266  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:31:52.118376  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:31:52.124566  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:31:52.133079  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.137009  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.137067  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.171540  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:31:52.180359  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:31:52.191734  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.197586  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.197656  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.238367  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:31:52.248045  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:31:52.257259  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.262431  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.262498  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.310780  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
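The openssl/ln pairs above install each CA using OpenSSL's hashed-symlink convention: openssl x509 -hash -noout prints the certificate's subject hash, and /etc/ssl/certs/<hash>.0 is the filename TLS libraries look up when building a chain. Spelled out for one certificate (a restatement of the logged commands):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # b5213941.0 in this run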
	I1115 09:31:52.321838  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:31:52.327131  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:31:52.384824  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:31:52.420556  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:31:52.456174  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:31:52.492992  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:31:52.527605  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
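The run of openssl x509 -checkend 86400 calls above is how the restart path decides whether the existing control-plane certificates can be reused: -checkend 86400 exits 0 only if the certificate remains valid for at least another 86400 seconds (24 hours), and a non-zero exit would presumably trigger regeneration. For example:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'valid for at least 24h' || echo 'expiring soon: would be regenerated'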
	I1115 09:31:52.563847  428896 kubeadm.go:401] StartCluster: {Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:52.564002  428896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:31:52.564061  428896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:31:52.598315  428896 cri.go:89] found id: "f33da4a57e7abac3ebb4c2bb796754d89a55d77cae917a4638e1dc7bb54b55b9"
	I1115 09:31:52.598342  428896 cri.go:89] found id: "6a62ffd50e27a5d8290e1041b339ee1c4011f892ee0b67e96eca3abce2936268"
	I1115 09:31:52.598346  428896 cri.go:89] found id: "98b9fc9a33f0b40586e635c881668594f59cdd960b26204a457a95a2020bd154"
	I1115 09:31:52.598352  428896 cri.go:89] found id: "bf31a867595678c370bce5d49663eec7f39f09c0ffba1367b034ab02c073ea71"
	I1115 09:31:52.598356  428896 cri.go:89] found id: "aa99d93bfb4888fbc03108f08590c503f95f20e1969eabb19d4a76ea1be94d6f"
	I1115 09:31:52.598361  428896 cri.go:89] found id: ""
	I1115 09:31:52.598433  428896 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 09:31:52.610898  428896 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:31:52Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:31:52.610984  428896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:31:52.619008  428896 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 09:31:52.619032  428896 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 09:31:52.619095  428896 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 09:31:52.626928  428896 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:31:52.627429  428896 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-577290" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:52.627702  428896 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-355485/kubeconfig needs updating (will repair): [kubeconfig missing "ha-577290" cluster setting kubeconfig missing "ha-577290" context setting]
	I1115 09:31:52.628120  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.628857  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:31:52.629429  428896 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 09:31:52.629443  428896 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 09:31:52.629457  428896 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 09:31:52.629464  428896 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 09:31:52.629469  428896 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 09:31:52.629474  428896 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 09:31:52.629935  428896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 09:31:52.638596  428896 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1115 09:31:52.638622  428896 kubeadm.go:602] duration metric: took 19.583961ms to restartPrimaryControlPlane
	I1115 09:31:52.638632  428896 kubeadm.go:403] duration metric: took 74.798878ms to StartCluster
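The diff -u of kubeadm.yaml against kubeadm.yaml.new just above is the whole check behind "does not require reconfiguration": the freshly rendered config is compared with the one already on the node, and an empty diff (exit status 0) means the control plane can simply be restarted. The same check by hand:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo 'no changes: reuse the existing control plane' \
      || echo 'config drift: the control plane would be reconfigured'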
	I1115 09:31:52.638659  428896 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.638739  428896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:52.639509  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.639770  428896 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:31:52.639796  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:31:52.639817  428896 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:31:52.640075  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:52.642696  428896 out.go:179] * Enabled addons: 
	I1115 09:31:52.643939  428896 addons.go:515] duration metric: took 4.127185ms for enable addons: enabled=[]
	I1115 09:31:52.643981  428896 start.go:247] waiting for cluster config update ...
	I1115 09:31:52.643992  428896 start.go:256] writing updated cluster config ...
	I1115 09:31:52.645418  428896 out.go:203] 
	I1115 09:31:52.646875  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:52.646991  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:52.648625  428896 out.go:179] * Starting "ha-577290-m02" control-plane node in "ha-577290" cluster
	I1115 09:31:52.649693  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:31:52.651012  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:31:52.652316  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:52.652334  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:31:52.652420  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:31:52.652479  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:31:52.652496  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:31:52.652639  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:52.677157  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:31:52.677183  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:31:52.677206  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:31:52.677237  428896 start.go:360] acquireMachinesLock for ha-577290-m02: {Name:mkf112ea76ada558a569f224e46caac6b694e64c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:31:52.677308  428896 start.go:364] duration metric: took 49.241µs to acquireMachinesLock for "ha-577290-m02"
	I1115 09:31:52.677330  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:31:52.677340  428896 fix.go:54] fixHost starting: m02
	I1115 09:31:52.677664  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:31:52.698576  428896 fix.go:112] recreateIfNeeded on ha-577290-m02: state=Stopped err=<nil>
	W1115 09:31:52.698609  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:31:52.700325  428896 out.go:252] * Restarting existing docker container for "ha-577290-m02" ...
	I1115 09:31:52.700427  428896 cli_runner.go:164] Run: docker start ha-577290-m02
	I1115 09:31:53.006147  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:31:53.028889  428896 kic.go:430] container "ha-577290-m02" state is running.
	I1115 09:31:53.029347  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:53.051018  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:53.051301  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:31:53.051366  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:53.074164  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:53.074499  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:53.074516  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:31:53.075211  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57138->127.0.0.1:33189: read: connection reset by peer
	I1115 09:31:56.207665  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m02
	
	I1115 09:31:56.207697  428896 ubuntu.go:182] provisioning hostname "ha-577290-m02"
	I1115 09:31:56.207780  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.232566  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:56.232897  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:56.232924  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m02 && echo "ha-577290-m02" | sudo tee /etc/hostname
	I1115 09:31:56.391849  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m02
	
	I1115 09:31:56.391935  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.414665  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:56.414967  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:56.414995  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:31:56.561504  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:31:56.561540  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:31:56.561563  428896 ubuntu.go:190] setting up certificates
	I1115 09:31:56.561579  428896 provision.go:84] configureAuth start
	I1115 09:31:56.561651  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:56.584955  428896 provision.go:143] copyHostCerts
	I1115 09:31:56.584995  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:56.585033  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:31:56.585051  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:56.585145  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:31:56.585258  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:56.585290  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:31:56.585298  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:56.585343  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:31:56.585423  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:56.585444  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:31:56.585450  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:56.585488  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:31:56.585575  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m02 san=[127.0.0.1 192.168.49.3 ha-577290-m02 localhost minikube]
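The server certificate generated above is signed by the shared machine CA (ca.pem/ca-key.pem) with SANs covering 127.0.0.1, the node IP 192.168.49.3, the hostname ha-577290-m02, localhost and minikube, so the docker-machine style TLS endpoint answers under any of those names. One way to inspect the SANs afterwards (generic openssl usage, not part of the test run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'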
	I1115 09:31:56.824747  428896 provision.go:177] copyRemoteCerts
	I1115 09:31:56.824826  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:31:56.824877  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.850475  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:56.951132  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:31:56.951210  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:31:56.977882  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:31:56.977954  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:31:56.997077  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:31:56.997147  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1115 09:31:57.016347  428896 provision.go:87] duration metric: took 454.750366ms to configureAuth
	I1115 09:31:57.016381  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:31:57.016674  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:57.016833  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.052679  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:57.053005  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:57.053029  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:31:57.426092  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:31:57.426126  428896 machine.go:97] duration metric: took 4.374809168s to provisionDockerMachine
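The sysconfig write during provisioning above leaves the file below on the node; it marks the whole service CIDR as an insecure registry range, presumably so images can be pulled from in-cluster registries (ClusterIP services in 10.96.0.0/12) without TLS:

    # /etc/sysconfig/crio.minikube, as written by the tee above
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '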
	I1115 09:31:57.426140  428896 start.go:293] postStartSetup for "ha-577290-m02" (driver="docker")
	I1115 09:31:57.426151  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:31:57.426220  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:31:57.426262  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.448519  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.545209  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:31:57.549384  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:31:57.549439  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:31:57.549452  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:31:57.549519  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:31:57.549596  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:31:57.549608  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:31:57.549687  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:31:57.558189  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:57.580235  428896 start.go:296] duration metric: took 154.07621ms for postStartSetup
	I1115 09:31:57.580333  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:31:57.580386  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.603433  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.701219  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:31:57.706336  428896 fix.go:56] duration metric: took 5.028989139s for fixHost
	I1115 09:31:57.706368  428896 start.go:83] releasing machines lock for "ha-577290-m02", held for 5.029048241s
	I1115 09:31:57.706470  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:57.727402  428896 out.go:179] * Found network options:
	I1115 09:31:57.728724  428896 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 09:31:57.729967  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:31:57.730005  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:31:57.730073  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:31:57.730128  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.730159  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:31:57.730230  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.748817  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.750362  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.903068  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:31:57.937805  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:31:57.937874  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:31:57.947024  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:31:57.947053  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:31:57.947136  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:31:57.947208  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:31:57.963666  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:31:57.976613  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:31:57.976675  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:31:57.991891  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:31:58.006003  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:31:58.153545  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:31:58.310509  428896 docker.go:234] disabling docker service ...
	I1115 09:31:58.310582  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:31:58.330775  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:31:58.348091  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:31:58.501312  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:31:58.629095  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:31:58.643176  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:31:58.658526  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:31:58.658590  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.668426  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:31:58.668483  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.679145  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.689023  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.698596  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:31:58.707252  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.717022  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.726715  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.735906  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:31:58.743685  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:31:58.751568  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:58.887672  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:29.141191  428896 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.253455227s)
	I1115 09:33:29.141240  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:29.141300  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:29.145595  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:29.145655  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:29.149342  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:29.174182  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:29.174254  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:29.204881  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:29.236181  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:29.237785  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:29.239150  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:29.257605  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:29.262168  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:29.273241  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:29.273540  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:29.273770  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:29.291615  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:29.291888  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.3
	I1115 09:33:29.291900  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:29.291916  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:29.292078  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:29.292119  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:29.292129  428896 certs.go:257] generating profile certs ...
	I1115 09:33:29.292200  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:33:29.292255  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.c5636f69
	I1115 09:33:29.292289  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:33:29.292300  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:29.292314  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:29.292326  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:29.292338  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:29.292352  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:33:29.292367  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:33:29.292387  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:33:29.292421  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:33:29.292481  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:29.292511  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:29.292522  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:29.292544  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:29.292568  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:29.292596  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:29.292645  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:29.292674  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.292685  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.292705  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.292756  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:33:29.311158  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:33:29.397746  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 09:33:29.402107  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 09:33:29.410807  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 09:33:29.414570  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 09:33:29.423209  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 09:33:29.426969  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 09:33:29.435369  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 09:33:29.439110  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 09:33:29.447938  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 09:33:29.451581  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 09:33:29.460040  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 09:33:29.463847  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 09:33:29.472802  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:29.491640  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:29.509789  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:29.527041  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:29.544384  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:33:29.562153  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:33:29.580258  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:29.598677  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:33:29.616730  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:29.635496  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:29.653811  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:29.671993  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 09:33:29.684693  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 09:33:29.697982  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 09:33:29.710750  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 09:33:29.723405  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 09:33:29.735786  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 09:33:29.748861  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 09:33:29.761801  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:29.768042  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:29.777574  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.781659  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.781740  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.817272  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:29.826567  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:29.836067  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.839987  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.840045  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.875123  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:29.884911  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:29.893650  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.897547  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.897614  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.933220  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:29.942015  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:29.946107  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:33:29.981924  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:33:30.017346  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:33:30.055728  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:33:30.091801  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:33:30.128083  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 09:33:30.165477  428896 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 09:33:30.165602  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:30.165633  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:33:30.165686  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:33:30.178477  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:33:30.178550  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1115 09:33:30.178626  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:30.187181  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:30.187255  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 09:33:30.195966  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:30.209403  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:30.222151  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:33:30.235250  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:30.239303  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:30.249724  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:30.355117  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:30.368971  428896 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:30.369229  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:30.370723  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:30.372269  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:30.476752  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:30.491166  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:30.491243  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:30.491612  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m02" to be "Ready" ...
	W1115 09:33:32.494974  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:34.495865  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:36.995901  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:39.495623  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	I1115 09:33:40.495728  428896 node_ready.go:49] node "ha-577290-m02" is "Ready"
	I1115 09:33:40.495762  428896 node_ready.go:38] duration metric: took 10.004119226s for node "ha-577290-m02" to be "Ready" ...
	I1115 09:33:40.495779  428896 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:33:40.495830  428896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:33:40.508005  428896 api_server.go:72] duration metric: took 10.138962389s to wait for apiserver process to appear ...
	I1115 09:33:40.508034  428896 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:33:40.508058  428896 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:33:40.513137  428896 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:33:40.514147  428896 api_server.go:141] control plane version: v1.34.1
	I1115 09:33:40.514171  428896 api_server.go:131] duration metric: took 6.130383ms to wait for apiserver health ...
	I1115 09:33:40.514180  428896 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:33:40.521806  428896 system_pods.go:59] 26 kube-system pods found
	I1115 09:33:40.521847  428896 system_pods.go:61] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:40.521853  428896 system_pods.go:61] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:40.521857  428896 system_pods.go:61] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:40.521860  428896 system_pods.go:61] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:40.521865  428896 system_pods.go:61] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running
	I1115 09:33:40.521868  428896 system_pods.go:61] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:40.521871  428896 system_pods.go:61] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:40.521877  428896 system_pods.go:61] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:40.521888  428896 system_pods.go:61] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:40.521903  428896 system_pods.go:61] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:40.521907  428896 system_pods.go:61] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:40.521910  428896 system_pods.go:61] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running
	I1115 09:33:40.521913  428896 system_pods.go:61] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:40.521917  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:40.521922  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running
	I1115 09:33:40.521926  428896 system_pods.go:61] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:40.521929  428896 system_pods.go:61] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:40.521932  428896 system_pods.go:61] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:40.521935  428896 system_pods.go:61] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:40.521938  428896 system_pods.go:61] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:40.521941  428896 system_pods.go:61] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:40.521943  428896 system_pods.go:61] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running
	I1115 09:33:40.521947  428896 system_pods.go:61] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:40.521951  428896 system_pods.go:61] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:40.521953  428896 system_pods.go:61] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:40.521956  428896 system_pods.go:61] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:40.521962  428896 system_pods.go:74] duration metric: took 7.776979ms to wait for pod list to return data ...
	I1115 09:33:40.521973  428896 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:33:40.525281  428896 default_sa.go:45] found service account: "default"
	I1115 09:33:40.525304  428896 default_sa.go:55] duration metric: took 3.325885ms for default service account to be created ...
	I1115 09:33:40.525314  428896 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:33:40.532899  428896 system_pods.go:86] 26 kube-system pods found
	I1115 09:33:40.532942  428896 system_pods.go:89] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:40.532948  428896 system_pods.go:89] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:40.532952  428896 system_pods.go:89] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:40.532955  428896 system_pods.go:89] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:40.532958  428896 system_pods.go:89] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running
	I1115 09:33:40.532962  428896 system_pods.go:89] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:40.532965  428896 system_pods.go:89] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:40.532972  428896 system_pods.go:89] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:40.532980  428896 system_pods.go:89] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:40.532985  428896 system_pods.go:89] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:40.532988  428896 system_pods.go:89] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:40.532991  428896 system_pods.go:89] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running
	I1115 09:33:40.532997  428896 system_pods.go:89] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:40.533001  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:40.533007  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running
	I1115 09:33:40.533012  428896 system_pods.go:89] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:40.533018  428896 system_pods.go:89] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:40.533022  428896 system_pods.go:89] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:40.533027  428896 system_pods.go:89] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:40.533030  428896 system_pods.go:89] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:40.533033  428896 system_pods.go:89] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:40.533036  428896 system_pods.go:89] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running
	I1115 09:33:40.533039  428896 system_pods.go:89] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:40.533042  428896 system_pods.go:89] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:40.533047  428896 system_pods.go:89] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:40.533052  428896 system_pods.go:89] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:40.533059  428896 system_pods.go:126] duration metric: took 7.740388ms to wait for k8s-apps to be running ...
	I1115 09:33:40.533069  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:33:40.533115  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:33:40.546948  428896 system_svc.go:56] duration metric: took 13.851414ms WaitForService to wait for kubelet
	I1115 09:33:40.546981  428896 kubeadm.go:587] duration metric: took 10.17796689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:33:40.547004  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:33:40.550887  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550928  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550955  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550959  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550963  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550966  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550969  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550972  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550976  428896 node_conditions.go:105] duration metric: took 3.967331ms to run NodePressure ...
	I1115 09:33:40.550987  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:33:40.551013  428896 start.go:256] writing updated cluster config ...
	I1115 09:33:40.553290  428896 out.go:203] 
	I1115 09:33:40.555010  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:40.555154  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.556732  428896 out.go:179] * Starting "ha-577290-m03" control-plane node in "ha-577290" cluster
	I1115 09:33:40.558293  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:33:40.559533  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:33:40.560557  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:40.560573  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:33:40.560658  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:33:40.560677  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:33:40.560686  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:33:40.560802  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.581841  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:33:40.581862  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:33:40.581881  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:33:40.581911  428896 start.go:360] acquireMachinesLock for ha-577290-m03: {Name:mk956e932a0a61462f744b4bf6dccfcc158f1677 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:40.581975  428896 start.go:364] duration metric: took 45.083µs to acquireMachinesLock for "ha-577290-m03"
	I1115 09:33:40.582000  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:33:40.582009  428896 fix.go:54] fixHost starting: m03
	I1115 09:33:40.582213  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m03 --format={{.State.Status}}
	I1115 09:33:40.599708  428896 fix.go:112] recreateIfNeeded on ha-577290-m03: state=Stopped err=<nil>
	W1115 09:33:40.599741  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:33:40.601856  428896 out.go:252] * Restarting existing docker container for "ha-577290-m03" ...
	I1115 09:33:40.601929  428896 cli_runner.go:164] Run: docker start ha-577290-m03
	I1115 09:33:40.883039  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m03 --format={{.State.Status}}
	I1115 09:33:40.902259  428896 kic.go:430] container "ha-577290-m03" state is running.
	I1115 09:33:40.902730  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:40.923104  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.923365  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:40.923449  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:40.942829  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:40.943125  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:40.943143  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:40.943747  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45594->127.0.0.1:33194: read: connection reset by peer
	I1115 09:33:44.097198  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m03
	
	I1115 09:33:44.097227  428896 ubuntu.go:182] provisioning hostname "ha-577290-m03"
	I1115 09:33:44.097294  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.119447  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.119771  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.119790  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m03 && echo "ha-577290-m03" | sudo tee /etc/hostname
	I1115 09:33:44.272682  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m03
	
	I1115 09:33:44.272754  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.292482  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.292709  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.292725  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:44.427118  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:33:44.427153  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:33:44.427180  428896 ubuntu.go:190] setting up certificates
	I1115 09:33:44.427192  428896 provision.go:84] configureAuth start
	I1115 09:33:44.427251  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:44.449125  428896 provision.go:143] copyHostCerts
	I1115 09:33:44.449170  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:44.449207  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:33:44.449220  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:44.449315  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:33:44.449479  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:44.449519  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:33:44.449527  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:44.449580  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:33:44.449658  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:44.449684  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:33:44.449692  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:44.449729  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:33:44.449848  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m03 san=[127.0.0.1 192.168.49.4 ha-577290-m03 localhost minikube]
	I1115 09:33:44.532362  428896 provision.go:177] copyRemoteCerts
	I1115 09:33:44.532433  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:44.532473  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.550652  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:44.646162  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:33:44.646224  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:44.664161  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:33:44.664221  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:44.683656  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:33:44.683729  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:33:44.709533  428896 provision.go:87] duration metric: took 282.323517ms to configureAuth
	I1115 09:33:44.709568  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:44.709953  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:44.710431  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.730924  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.731134  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.731151  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:45.072969  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:45.073007  428896 machine.go:97] duration metric: took 4.149624743s to provisionDockerMachine
	I1115 09:33:45.073028  428896 start.go:293] postStartSetup for "ha-577290-m03" (driver="docker")
	I1115 09:33:45.073041  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:45.073117  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:45.073164  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.096852  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.197468  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:45.201750  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:45.201783  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:45.201797  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:33:45.201858  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:33:45.201951  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:33:45.201963  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:33:45.202075  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:33:45.210217  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:45.228458  428896 start.go:296] duration metric: took 155.41494ms for postStartSetup
	I1115 09:33:45.228526  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:45.228575  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.246932  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.337973  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:45.343136  428896 fix.go:56] duration metric: took 4.76111959s for fixHost
	I1115 09:33:45.343165  428896 start.go:83] releasing machines lock for "ha-577290-m03", held for 4.761175125s
	I1115 09:33:45.343237  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:45.363267  428896 out.go:179] * Found network options:
	I1115 09:33:45.364603  428896 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 09:33:45.365919  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365945  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365965  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365973  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:33:45.366049  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:45.366084  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.366197  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:45.366269  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.385469  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.385900  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.512144  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:45.539108  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:45.539183  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:45.548657  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:33:45.548681  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:33:45.548714  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:33:45.548758  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:45.565030  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:45.578828  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:45.578876  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:45.593659  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:45.606896  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:45.719282  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:45.833886  428896 docker.go:234] disabling docker service ...
	I1115 09:33:45.833972  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:45.849553  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:45.863178  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:46.002558  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:46.122751  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:46.135787  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:46.152335  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:46.152386  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.162211  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:33:46.162288  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.172907  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.182146  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.191787  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:46.201198  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.211208  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.221525  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.231770  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:46.240242  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:33:46.248568  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:46.362978  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:46.529312  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:46.529407  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:46.534021  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:46.534084  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:46.537777  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:46.562624  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:46.562720  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:46.593038  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:46.624612  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:46.625782  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:46.626701  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 09:33:46.627918  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:46.647913  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:46.652309  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:46.663365  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:46.663617  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:46.663854  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:46.683967  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:46.684227  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.4
	I1115 09:33:46.684240  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:46.684254  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:46.684373  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:46.684442  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:46.684456  428896 certs.go:257] generating profile certs ...
	I1115 09:33:46.684531  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:33:46.684570  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.4e419922
	I1115 09:33:46.684607  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:33:46.684619  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:46.684635  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:46.684648  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:46.684658  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:46.684670  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:33:46.684682  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:33:46.684694  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:33:46.684703  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:33:46.684763  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:46.684793  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:46.684803  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:46.684825  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:46.684845  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:46.684867  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:46.684981  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:46.685022  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:46.685039  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:46.685052  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:46.685102  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:33:46.704190  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:33:46.792775  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 09:33:46.797318  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 09:33:46.806208  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 09:33:46.810016  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 09:33:46.819830  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 09:33:46.823486  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 09:33:46.831939  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 09:33:46.835879  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 09:33:46.844637  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 09:33:46.848667  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 09:33:46.857507  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 09:33:46.861254  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 09:33:46.870691  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:46.890068  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:46.908762  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:46.928604  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:46.946771  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:33:46.966008  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:33:46.985099  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:47.004286  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:33:47.023701  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:47.044426  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:47.063586  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:47.083517  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 09:33:47.097148  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 09:33:47.110614  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 09:33:47.125289  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 09:33:47.139218  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 09:33:47.152613  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 09:33:47.167316  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 09:33:47.186848  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:47.196607  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:47.208413  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.212323  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.212377  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.248988  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:47.257951  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:47.270018  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.276511  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.276612  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.315123  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:47.324272  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:47.333692  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.337850  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.337904  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.377447  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:47.386605  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:47.390885  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:33:47.428238  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:33:47.463635  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:33:47.500538  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:33:47.537928  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:33:47.573729  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 09:33:47.608297  428896 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1115 09:33:47.608438  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
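	The kubelet unit above is regenerated per node, with --hostname-override and --node-ip varying by member. A small sketch, assuming only the flag values visible in the log, of how that ExecStart line could be assembled (the function and file name are illustrative, not minikube's):

	// kubelet_flags_sketch.go - illustrative assembly of the ExecStart flags shown above.
	package main

	import (
		"fmt"
		"strings"
	)

	func kubeletFlags(version, nodeName, nodeIP string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
	}

	func main() {
		// values taken from the log above (node m03)
		fmt.Println(kubeletFlags("v1.34.1", "ha-577290-m03", "192.168.49.4"))
	}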
	I1115 09:33:47.608465  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:33:47.608505  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:33:47.621813  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:33:47.621905  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
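	Note that the kube-vip manifest above was generated only after the "lsmod | grep ip_vs" probe failed with exit status 1, so control-plane load-balancing was skipped in favor of the plain ARP-based VIP. A minimal sketch of that decision, assuming nothing beyond the logged shell probe:

	// ipvs_check_sketch.go - mirrors the "lsmod | grep ip_vs" decision logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func ipvsAvailable() bool {
		// grep exits non-zero when no ip_vs modules are loaded
		return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
	}

	func main() {
		if ipvsAvailable() {
			fmt.Println("ip_vs modules present: control-plane load-balancing can be enabled")
		} else {
			fmt.Println("ip_vs modules missing: fall back to ARP-based VIP (as in the log)")
		}
	}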
	I1115 09:33:47.621980  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:47.629857  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:47.629945  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 09:33:47.638232  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:47.652261  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:47.666706  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:33:47.681044  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:47.685137  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:47.696618  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:47.811257  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:47.825255  428896 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:47.825603  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:47.827569  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:47.828637  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:47.945833  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:47.960377  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:47.960507  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:47.960779  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m03" to be "Ready" ...
	I1115 09:33:47.964177  428896 node_ready.go:49] node "ha-577290-m03" is "Ready"
	I1115 09:33:47.964207  428896 node_ready.go:38] duration metric: took 3.409493ms for node "ha-577290-m03" to be "Ready" ...
	I1115 09:33:47.964220  428896 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:33:47.964274  428896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:33:47.976501  428896 api_server.go:72] duration metric: took 151.188832ms to wait for apiserver process to appear ...
	I1115 09:33:47.976526  428896 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:33:47.976549  428896 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:33:47.982576  428896 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:33:47.983614  428896 api_server.go:141] control plane version: v1.34.1
	I1115 09:33:47.983645  428896 api_server.go:131] duration metric: took 7.111217ms to wait for apiserver health ...
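	The healthz wait above simply polls the apiserver endpoint until it returns 200. A short sketch of such a poll, assuming the endpoint from the log; certificate verification is skipped here only to keep the example self-contained, whereas the logged checker uses the cluster CA and client certs:

	// healthz_poll_sketch.go - minimal poll of the apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// sketch only: the real check trusts the cluster CA instead of skipping verification
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 10; i++ {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(time.Second)
		}
	}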
	I1115 09:33:47.983656  428896 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:33:47.990372  428896 system_pods.go:59] 26 kube-system pods found
	I1115 09:33:47.990422  428896 system_pods.go:61] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:47.990429  428896 system_pods.go:61] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:47.990435  428896 system_pods.go:61] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:47.990441  428896 system_pods.go:61] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:47.990450  428896 system_pods.go:61] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:33:47.990461  428896 system_pods.go:61] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:47.990470  428896 system_pods.go:61] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:47.990481  428896 system_pods.go:61] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:47.990487  428896 system_pods.go:61] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:47.990493  428896 system_pods.go:61] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:47.990498  428896 system_pods.go:61] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:47.990505  428896 system_pods.go:61] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:33:47.990511  428896 system_pods.go:61] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:47.990517  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:47.990526  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:33:47.990535  428896 system_pods.go:61] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:47.990541  428896 system_pods.go:61] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:47.990544  428896 system_pods.go:61] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:47.990549  428896 system_pods.go:61] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:47.990557  428896 system_pods.go:61] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:47.990562  428896 system_pods.go:61] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:47.990570  428896 system_pods.go:61] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:33:47.990578  428896 system_pods.go:61] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:47.990584  428896 system_pods.go:61] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:47.990592  428896 system_pods.go:61] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:47.990597  428896 system_pods.go:61] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:47.990604  428896 system_pods.go:74] duration metric: took 6.940099ms to wait for pod list to return data ...
	I1115 09:33:47.990618  428896 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:33:47.993458  428896 default_sa.go:45] found service account: "default"
	I1115 09:33:47.993482  428896 default_sa.go:55] duration metric: took 2.857379ms for default service account to be created ...
	I1115 09:33:47.993492  428896 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:33:47.999362  428896 system_pods.go:86] 26 kube-system pods found
	I1115 09:33:47.999436  428896 system_pods.go:89] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:47.999446  428896 system_pods.go:89] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:47.999452  428896 system_pods.go:89] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:47.999467  428896 system_pods.go:89] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:47.999481  428896 system_pods.go:89] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:33:47.999488  428896 system_pods.go:89] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:47.999498  428896 system_pods.go:89] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:47.999510  428896 system_pods.go:89] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:47.999520  428896 system_pods.go:89] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:47.999527  428896 system_pods.go:89] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:47.999536  428896 system_pods.go:89] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:47.999544  428896 system_pods.go:89] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:33:47.999553  428896 system_pods.go:89] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:47.999561  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:47.999573  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:33:47.999586  428896 system_pods.go:89] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:47.999594  428896 system_pods.go:89] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:47.999602  428896 system_pods.go:89] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:47.999608  428896 system_pods.go:89] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:47.999615  428896 system_pods.go:89] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:47.999623  428896 system_pods.go:89] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:47.999633  428896 system_pods.go:89] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:33:47.999642  428896 system_pods.go:89] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:47.999654  428896 system_pods.go:89] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:47.999660  428896 system_pods.go:89] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:47.999665  428896 system_pods.go:89] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:47.999676  428896 system_pods.go:126] duration metric: took 6.175615ms to wait for k8s-apps to be running ...
	I1115 09:33:47.999689  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:33:47.999747  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:33:48.013321  428896 system_svc.go:56] duration metric: took 13.620486ms WaitForService to wait for kubelet
	I1115 09:33:48.013354  428896 kubeadm.go:587] duration metric: took 188.047542ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:33:48.013372  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:33:48.017378  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017414  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017429  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017435  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017440  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017446  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017451  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017456  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017465  428896 node_conditions.go:105] duration metric: took 4.087504ms to run NodePressure ...
	I1115 09:33:48.017479  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:33:48.017513  428896 start.go:256] writing updated cluster config ...
	I1115 09:33:48.019414  428896 out.go:203] 
	I1115 09:33:48.021095  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:48.021213  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.022801  428896 out.go:179] * Starting "ha-577290-m04" worker node in "ha-577290" cluster
	I1115 09:33:48.023813  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:33:48.025033  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:33:48.026034  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:48.026051  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:33:48.026126  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:33:48.026161  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:33:48.026176  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:33:48.026313  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.048674  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:33:48.048695  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:33:48.048712  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:33:48.048737  428896 start.go:360] acquireMachinesLock for ha-577290-m04: {Name:mk727375190f43e7b9d23177818f3e0fe7e90632 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:48.048792  428896 start.go:364] duration metric: took 39.722µs to acquireMachinesLock for "ha-577290-m04"
	I1115 09:33:48.048810  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:33:48.048817  428896 fix.go:54] fixHost starting: m04
	I1115 09:33:48.049018  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:33:48.066458  428896 fix.go:112] recreateIfNeeded on ha-577290-m04: state=Stopped err=<nil>
	W1115 09:33:48.066487  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:33:48.068426  428896 out.go:252] * Restarting existing docker container for "ha-577290-m04" ...
	I1115 09:33:48.068502  428896 cli_runner.go:164] Run: docker start ha-577290-m04
	I1115 09:33:48.374025  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:33:48.394334  428896 kic.go:430] container "ha-577290-m04" state is running.
	I1115 09:33:48.394855  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:48.414950  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.415224  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:48.415304  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:48.436207  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:48.436464  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:48.436478  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:48.437107  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46302->127.0.0.1:33199: read: connection reset by peer
	I1115 09:33:51.570007  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m04
	
	I1115 09:33:51.570038  428896 ubuntu.go:182] provisioning hostname "ha-577290-m04"
	I1115 09:33:51.570109  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:51.589648  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:51.589938  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:51.589956  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m04 && echo "ha-577290-m04" | sudo tee /etc/hostname
	I1115 09:33:51.730555  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m04
	
	I1115 09:33:51.730652  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:51.749427  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:51.749732  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:51.749758  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:51.881659  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
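	The SSH command above either rewrites an existing 127.0.1.1 line or appends one so the node resolves its own hostname. A hedged sketch of the same fix-up done directly on the file (function and file names are illustrative, not part of minikube):

	// etc_hosts_sketch.go - mirrors the 127.0.1.1 hostname fix-up shown in the SSH command above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostname(hostsPath, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), name) {
			return nil // hostname already present
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if re.Match(data) {
			out = re.ReplaceAllString(string(data), "127.0.1.1 "+name)
		} else {
			out = string(data) + "\n127.0.1.1 " + name + "\n"
		}
		return os.WriteFile(hostsPath, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostname("/etc/hosts", "ha-577290-m04"); err != nil {
			fmt.Println("update failed:", err)
		}
	}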
	I1115 09:33:51.881699  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:33:51.881721  428896 ubuntu.go:190] setting up certificates
	I1115 09:33:51.881735  428896 provision.go:84] configureAuth start
	I1115 09:33:51.881795  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:51.905477  428896 provision.go:143] copyHostCerts
	I1115 09:33:51.905520  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:51.905560  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:33:51.905565  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:51.905636  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:33:51.905713  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:51.905742  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:33:51.905749  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:51.905780  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:33:51.905850  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:51.905881  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:33:51.905887  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:51.905918  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:33:51.905994  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m04 san=[127.0.0.1 192.168.49.5 ha-577290-m04 localhost minikube]
	I1115 09:33:52.709519  428896 provision.go:177] copyRemoteCerts
	I1115 09:33:52.709588  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:52.709639  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:52.729670  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:52.827014  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:33:52.827074  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:52.845307  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:33:52.845373  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:33:52.864228  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:33:52.864311  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:52.882736  428896 provision.go:87] duration metric: took 1.000983567s to configureAuth
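	configureAuth above generates a machine server certificate whose SANs cover the node's names and IPs (127.0.0.1, 192.168.49.5, ha-577290-m04, localhost, minikube) and copies it to /etc/docker. As a rough sketch of producing a certificate with those SANs using crypto/x509 — self-signed here for brevity, whereas minikube signs with its CA key:

	// server_cert_sketch.go - illustrative cert with the SANs listed in the log above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			fmt.Println("keygen failed:", err)
			return
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-577290-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-577290-m04", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			fmt.Println("cert creation failed:", err)
			return
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}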
	I1115 09:33:52.882768  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:52.882985  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:52.883086  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:52.901749  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:52.901964  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:52.901980  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:53.158344  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:53.158378  428896 machine.go:97] duration metric: took 4.74313086s to provisionDockerMachine
	I1115 09:33:53.158427  428896 start.go:293] postStartSetup for "ha-577290-m04" (driver="docker")
	I1115 09:33:53.158462  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:53.158540  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:53.158593  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.180692  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.278677  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:53.282826  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:53.282861  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:53.282950  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:33:53.283052  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:33:53.283142  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:33:53.283157  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:33:53.283256  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:33:53.292307  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:53.311030  428896 start.go:296] duration metric: took 152.582175ms for postStartSetup
	I1115 09:33:53.311119  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:53.311155  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.330486  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.423358  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:53.428267  428896 fix.go:56] duration metric: took 5.379444169s for fixHost
	I1115 09:33:53.428291  428896 start.go:83] releasing machines lock for "ha-577290-m04", held for 5.379488718s
	I1115 09:33:53.428356  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:53.450722  428896 out.go:179] * Found network options:
	I1115 09:33:53.452273  428896 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1115 09:33:53.453579  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453607  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453616  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453643  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453660  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453674  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:33:53.453759  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:53.453807  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:53.453873  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.453813  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.472760  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.473149  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.627249  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:53.632573  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:53.632637  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:53.642178  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:33:53.642206  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:33:53.642240  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:33:53.642300  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:53.657825  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:53.671742  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:53.671815  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:53.687976  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:53.701149  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:53.785060  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:53.872517  428896 docker.go:234] disabling docker service ...
	I1115 09:33:53.872587  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:53.888847  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:53.902669  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:53.985655  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:54.076443  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:54.089637  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:54.104342  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:54.104514  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.113954  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:33:54.114031  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.123713  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.133355  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.144683  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:54.153702  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.163284  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.172255  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.181589  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:54.189668  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:33:54.197336  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:54.288186  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:54.403383  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:54.403492  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:54.407772  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:54.407839  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:54.411798  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:54.438501  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:54.438607  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:54.468561  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:54.499645  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:54.501099  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:54.502317  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 09:33:54.503727  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1115 09:33:54.505140  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:54.524109  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:54.528569  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:54.539044  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:54.539261  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:54.539487  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:54.557777  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:54.558052  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.5
	I1115 09:33:54.558069  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:54.558091  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:54.558225  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:54.558262  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:54.558276  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:54.558292  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:54.558306  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:54.558319  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:54.558371  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:54.558419  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:54.558431  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:54.558454  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:54.558475  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:54.558502  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:54.558543  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:54.558573  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.558586  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.558599  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.558619  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:54.581222  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:54.600809  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:54.619688  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:54.637947  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:54.657828  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:54.680584  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:54.710166  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:54.717263  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:54.727158  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.731833  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.731883  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.768964  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:54.777707  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:54.787101  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.791155  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.791218  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.826198  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:54.835154  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:54.845054  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.849628  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.849691  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.888273  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:54.897198  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:54.901079  428896 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:33:54.901140  428896 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1115 09:33:54.901265  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:54.901334  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:54.910356  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:54.910503  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1115 09:33:54.919713  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:54.934154  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:54.948279  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:54.952666  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:54.964534  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:55.052727  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:55.067727  428896 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1115 09:33:55.068040  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:55.070111  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:55.071556  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:55.163626  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:55.178038  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:55.178107  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:55.178364  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m04" to be "Ready" ...
	W1115 09:33:57.182074  428896 node_ready.go:57] node "ha-577290-m04" has "Ready":"Unknown" status (will retry)
	W1115 09:33:59.682695  428896 node_ready.go:57] node "ha-577290-m04" has "Ready":"Unknown" status (will retry)
	I1115 09:34:01.682637  428896 node_ready.go:49] node "ha-577290-m04" is "Ready"
	I1115 09:34:01.682668  428896 node_ready.go:38] duration metric: took 6.504287602s for node "ha-577290-m04" to be "Ready" ...
	I1115 09:34:01.682681  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:34:01.682732  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:34:01.696758  428896 system_svc.go:56] duration metric: took 14.066869ms WaitForService to wait for kubelet
	I1115 09:34:01.696792  428896 kubeadm.go:587] duration metric: took 6.629025488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:34:01.696815  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:34:01.700561  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700588  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700599  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700603  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700606  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700609  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700612  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700615  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700619  428896 node_conditions.go:105] duration metric: took 3.798933ms to run NodePressure ...
	I1115 09:34:01.700630  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:34:01.700652  428896 start.go:256] writing updated cluster config ...
	I1115 09:34:01.700940  428896 ssh_runner.go:195] Run: rm -f paused
	I1115 09:34:01.705190  428896 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:34:01.705690  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:34:01.714720  428896 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hcps6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.720476  428896 pod_ready.go:94] pod "coredns-66bc5c9577-hcps6" is "Ready"
	I1115 09:34:01.720506  428896 pod_ready.go:86] duration metric: took 5.756993ms for pod "coredns-66bc5c9577-hcps6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.720518  428896 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xqpdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.725758  428896 pod_ready.go:94] pod "coredns-66bc5c9577-xqpdq" is "Ready"
	I1115 09:34:01.725790  428896 pod_ready.go:86] duration metric: took 5.264346ms for pod "coredns-66bc5c9577-xqpdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.728618  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.733682  428896 pod_ready.go:94] pod "etcd-ha-577290" is "Ready"
	I1115 09:34:01.733713  428896 pod_ready.go:86] duration metric: took 5.068711ms for pod "etcd-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.733724  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.738674  428896 pod_ready.go:94] pod "etcd-ha-577290-m02" is "Ready"
	I1115 09:34:01.738702  428896 pod_ready.go:86] duration metric: took 4.96923ms for pod "etcd-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.738711  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.907175  428896 request.go:683] "Waited before sending request" delay="168.345879ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-577290-m03"
	I1115 09:34:02.106204  428896 request.go:683] "Waited before sending request" delay="195.32057ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:02.109590  428896 pod_ready.go:94] pod "etcd-ha-577290-m03" is "Ready"
	I1115 09:34:02.109621  428896 pod_ready.go:86] duration metric: took 370.905099ms for pod "etcd-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.307120  428896 request.go:683] "Waited before sending request" delay="197.367777ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 09:34:02.311497  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.506963  428896 request.go:683] "Waited before sending request" delay="195.356346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290"
	I1115 09:34:02.706771  428896 request.go:683] "Waited before sending request" delay="196.448308ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290"
	I1115 09:34:02.710109  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290" is "Ready"
	I1115 09:34:02.710139  428896 pod_ready.go:86] duration metric: took 398.612345ms for pod "kube-apiserver-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.710148  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.906594  428896 request.go:683] "Waited before sending request" delay="196.34557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290-m02"
	I1115 09:34:03.106336  428896 request.go:683] "Waited before sending request" delay="196.305201ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:03.109900  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290-m02" is "Ready"
	I1115 09:34:03.109935  428896 pod_ready.go:86] duration metric: took 399.77994ms for pod "kube-apiserver-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.109947  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.306248  428896 request.go:683] "Waited before sending request" delay="196.205945ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290-m03"
	I1115 09:34:03.507032  428896 request.go:683] "Waited before sending request" delay="197.392595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:03.509957  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290-m03" is "Ready"
	I1115 09:34:03.509989  428896 pod_ready.go:86] duration metric: took 400.035581ms for pod "kube-apiserver-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.706553  428896 request.go:683] "Waited before sending request" delay="196.41245ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 09:34:03.710543  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.907045  428896 request.go:683] "Waited before sending request" delay="196.330959ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290"
	I1115 09:34:04.106816  428896 request.go:683] "Waited before sending request" delay="196.427767ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290"
	I1115 09:34:04.110328  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290" is "Ready"
	I1115 09:34:04.110357  428896 pod_ready.go:86] duration metric: took 399.786401ms for pod "kube-controller-manager-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.110368  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.306851  428896 request.go:683] "Waited before sending request" delay="196.351238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290-m02"
	I1115 09:34:04.506506  428896 request.go:683] "Waited before sending request" delay="196.393036ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:04.509995  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290-m02" is "Ready"
	I1115 09:34:04.510025  428896 pod_ready.go:86] duration metric: took 399.650133ms for pod "kube-controller-manager-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.510034  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.706646  428896 request.go:683] "Waited before sending request" delay="196.418062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290-m03"
	I1115 09:34:04.906837  428896 request.go:683] "Waited before sending request" delay="196.369246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:04.909799  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290-m03" is "Ready"
	I1115 09:34:04.909834  428896 pod_ready.go:86] duration metric: took 399.79293ms for pod "kube-controller-manager-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:05.106269  428896 request.go:683] "Waited before sending request" delay="196.284181ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1115 09:34:05.110078  428896 pod_ready.go:83] waiting for pod "kube-proxy-4j6b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:05.306484  428896 request.go:683] "Waited before sending request" delay="196.226116ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4j6b5"
	I1115 09:34:05.506233  428896 request.go:683] "Waited before sending request" delay="196.286404ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:05.706640  428896 request.go:683] "Waited before sending request" delay="96.270262ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4j6b5"
	I1115 09:34:05.906700  428896 request.go:683] "Waited before sending request" delay="196.368708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:06.306548  428896 request.go:683] "Waited before sending request" delay="192.368837ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:06.707117  428896 request.go:683] "Waited before sending request" delay="93.270622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	W1115 09:34:07.116563  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:09.617314  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:12.116956  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:14.616273  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:17.116371  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:19.116501  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:21.116689  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:23.116818  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:25.617234  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:28.117036  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:30.617226  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:33.116469  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:35.616777  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:37.617262  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:40.117449  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:42.117831  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:44.616287  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:46.618306  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:49.116723  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:51.616229  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:53.617820  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:56.116943  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:58.616333  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:00.616873  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:02.617011  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:05.117447  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:07.616106  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:09.616804  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:12.124337  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:14.616125  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:16.617016  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:19.118269  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:21.616189  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:23.617124  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:26.116836  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:28.117058  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:30.117374  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:32.618970  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:35.116227  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:37.117008  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:39.616965  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:42.116851  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:44.618213  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:47.116222  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:49.616933  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:52.116850  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:54.616756  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:57.116793  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:59.616644  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:02.116080  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:04.116718  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:06.618437  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:09.116036  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:11.116546  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:13.616999  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:16.117083  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:18.616365  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:20.616664  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:22.617250  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:25.116824  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:27.116961  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:29.616385  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:32.116865  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:34.616343  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:36.616981  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:39.117055  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:41.616357  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:43.616462  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:45.616976  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:48.117111  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:50.616999  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:53.115913  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:55.116281  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:57.616365  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:59.616778  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:02.116803  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:04.615843  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:06.616292  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:08.617646  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:11.116723  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:13.116830  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:15.616517  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:18.116690  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:20.616314  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:23.116309  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:25.116508  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:27.117035  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:29.617437  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:32.116146  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:34.116964  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:36.616844  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:39.115867  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:41.116493  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:43.616383  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:45.617047  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:48.116809  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:50.617022  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:53.116939  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:55.615892  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:57.616280  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:38:00.116339  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	I1115 09:38:01.705542  428896 pod_ready.go:86] duration metric: took 3m56.595425039s for pod "kube-proxy-4j6b5" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 09:38:01.705579  428896 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1115 09:38:01.705595  428896 pod_ready.go:40] duration metric: took 4m0.000371267s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:38:01.707088  428896 out.go:203] 
	W1115 09:38:01.708237  428896 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1115 09:38:01.709353  428896 out.go:203] 
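The pod_ready loop above polls the Ready condition of kube-proxy-4j6b5 (scheduled on ha-577290-m02, per the node queries above) until the 4m0s extra-wait deadline runs out; that pod never reports Ready, which is what produces the GUEST_START / WaitExtra deadline exit. As an illustrative manual equivalent of the same check (not part of the test harness; pod name taken from the log above), one could run:

	kubectl -n kube-system get pod kube-proxy-4j6b5 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl -n kube-system wait --for=condition=Ready pod/kube-proxy-4j6b5 --timeout=240s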
	
	
	==> CRI-O <==
	Nov 15 09:31:58 ha-577290 crio[579]: time="2025-11-15T09:31:58.166448503Z" level=info msg="Starting container: cee33caab4e63c53b0f16030d6b7e5ed117b6d8deb336214e6325e4c21565d5d" id=c1b92e8f-c9f4-4c82-a41e-6504366337f3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:31:58 ha-577290 crio[579]: time="2025-11-15T09:31:58.170004607Z" level=info msg="Started container" PID=1093 containerID=cee33caab4e63c53b0f16030d6b7e5ed117b6d8deb336214e6325e4c21565d5d description=kube-system/kube-proxy-zkk5v/kube-proxy id=c1b92e8f-c9f4-4c82-a41e-6504366337f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14039ab33aafde8013836c1fd46872278f5297798f6c07d283d68a97ea4583f7
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.627923874Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.632195153Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.632221855Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.632240686Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.636286833Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.636323988Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.636345278Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.640494776Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.640533607Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.640558413Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.644494386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.644530081Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.888148988Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=98eacdea-1a37-499f-8909-be6da1da2735 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.889164748Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0ea4584e-7de3-4971-b7a9-982693ba6272 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.890416359Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=41502b7a-3213-48de-9ecf-63187b27ee99 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.890583199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.896320599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.896586136Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/32262e33a9ace53eeef0ce8cec406ff2f8080ce5fcc81622a4d5a449e4254a8a/merged/etc/passwd: no such file or directory"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.89662929Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/32262e33a9ace53eeef0ce8cec406ff2f8080ce5fcc81622a4d5a449e4254a8a/merged/etc/group: no such file or directory"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.896965362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.924907138Z" level=info msg="Created container 15f99c8b7c3ec74fa6cd3825acae110d7aaa10d4ae4bc392f84d2694551fea64: kube-system/storage-provisioner/storage-provisioner" id=41502b7a-3213-48de-9ecf-63187b27ee99 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.925582994Z" level=info msg="Starting container: 15f99c8b7c3ec74fa6cd3825acae110d7aaa10d4ae4bc392f84d2694551fea64" id=f7d7c439-9d04-49d3-8fb2-a5cb5724fe0d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.927550703Z" level=info msg="Started container" PID=1384 containerID=15f99c8b7c3ec74fa6cd3825acae110d7aaa10d4ae4bc392f84d2694551fea64 description=kube-system/storage-provisioner/storage-provisioner id=f7d7c439-9d04-49d3-8fb2-a5cb5724fe0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e2c23b2acccac36730090ee320863048bfa4890601874d8655723187b870ec5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	15f99c8b7c3ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner       1                   1e2c23b2accca       storage-provisioner                 kube-system
	af67b5a139cd4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   dee6193939725       coredns-66bc5c9577-hcps6            kube-system
	6327e6dd1bf4f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   0425b8d4bc632       coredns-66bc5c9577-xqpdq            kube-system
	aea52b96c3cdc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   41aaed08227fa       busybox-7b57f96db7-wzz75            default
	cee33caab4e63       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 minutes ago       Running             kube-proxy                0                   14039ab33aafd       kube-proxy-zkk5v                    kube-system
	db97b636d1fb3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       0                   1e2c23b2accca       storage-provisioner                 kube-system
	35abc581515dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               0                   8743618d26aee       kindnet-dsj4t                       kube-system
	f33da4a57e7ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 minutes ago       Running             etcd                      0                   dc03af57a1b95       etcd-ha-577290                      kube-system
	6a62ffd50e27a       ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38   6 minutes ago       Running             kube-vip                  0                   e33baac547bf7       kube-vip-ha-577290                  kube-system
	98b9fc9a33f0b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 minutes ago       Running             kube-apiserver            0                   8ef9eeee65fdd       kube-apiserver-ha-577290            kube-system
	bf31a86759567       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 minutes ago       Running             kube-scheduler            0                   dabbff5016f34       kube-scheduler-ha-577290            kube-system
	aa99d93bfb488       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 minutes ago       Running             kube-controller-manager   0                   e6c1abddb49a1       kube-controller-manager-ha-577290   kube-system
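
The first column holds CRI-O container IDs truncated to 13 characters. An illustrative way to inspect the kube-proxy container from this table directly on the node (not part of the report; crictl accepts ID prefixes, and the ID below is copied from the kube-proxy-zkk5v row above) would be:

	sudo crictl ps -a --name kube-proxy
	sudo crictl logs cee33caab4e63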
	
	
	==> coredns [6327e6dd1bf4f46a1bf0de49d7f69cdd31bbfbeebe3c41e363eb0c978600cefc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53562 - 60738 "HINFO IN 3413401309951715269.3888521406455700014. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023013792s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [af67b5a139cd4598535eb46e6ae6be357b66b795698048e10bf4fbc158e6b4bc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51957 - 16081 "HINFO IN 3362391939732844574.4975598062171033207. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05710445s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
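
Both CoreDNS instances log the same pattern: list calls to https://10.96.0.1:443 (the in-cluster kubernetes Service VIP within the 10.96.0.0/12 ServiceCIDR shown in the cluster config above) time out during the restart window. An illustrative check of whether that Service currently has apiserver endpoints, outside the harness, would be:

	kubectl -n default get service kubernetes
	kubectl -n default get endpoints kubernetes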
	
	
	==> describe nodes <==
	Name:               ha-577290
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-577290
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=ha-577290
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_27_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:26:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-577290
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:26:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:26:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:26:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:27:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-577290
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                17b25390-0a5d-4f6f-a9da-379a9ddec8f9
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wzz75             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 coredns-66bc5c9577-hcps6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-xqpdq             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-577290                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-dsj4t                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-577290             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-577290    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zkk5v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-577290             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-577290                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 6m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-577290 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-577290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-577290 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-577290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-577290 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-577290 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                    node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  NodeReady                10m                    kubelet          Node ha-577290 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           7m17s                  node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  Starting                 6m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m12s (x8 over 6m12s)  kubelet          Node ha-577290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s (x8 over 6m12s)  kubelet          Node ha-577290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s (x8 over 6m12s)  kubelet          Node ha-577290 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	
	
	Name:               ha-577290-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-577290-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=ha-577290
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T09_27_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:27:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-577290-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:37:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:37:45 +0000   Sat, 15 Nov 2025 09:27:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:37:45 +0000   Sat, 15 Nov 2025 09:27:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:37:45 +0000   Sat, 15 Nov 2025 09:27:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:37:45 +0000   Sat, 15 Nov 2025 09:33:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-577290-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                cf61e038-9210-463f-800d-6938cf508c1f
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-n4kml                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 etcd-ha-577290-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-k8kmn                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-577290-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-577290-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4j6b5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-577290-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-577290-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   RegisteredNode           10m                    node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   NodeHasSufficientMemory  7m22s (x8 over 7m22s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m22s (x8 over 7m22s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m22s (x8 over 7m22s)  kubelet          Node ha-577290-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m22s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m17s                  node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet          Node ha-577290-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m10s (x8 over 6m10s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Warning  ContainerGCFailed        5m10s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	
	
	Name:               ha-577290-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-577290-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=ha-577290
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T09_28_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:28:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-577290-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:37:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:37:39 +0000   Sat, 15 Nov 2025 09:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:37:39 +0000   Sat, 15 Nov 2025 09:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:37:39 +0000   Sat, 15 Nov 2025 09:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:37:39 +0000   Sat, 15 Nov 2025 09:33:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-577290-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                8ca1cbad-ce0f-448e-9e20-5ed335d3985c
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4h67r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 etcd-ha-577290-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m58s
	  kube-system                 kindnet-ltfl5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m58s
	  kube-system                 kube-apiserver-ha-577290-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-controller-manager-ha-577290-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-proxy-k6gmr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-scheduler-ha-577290-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-vip-ha-577290-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m44s                  kube-proxy       
	  Normal  RegisteredNode           9m57s                  node-controller  Node ha-577290-m03 event: Registered Node ha-577290-m03 in Controller
	  Normal  RegisteredNode           9m56s                  node-controller  Node ha-577290-m03 event: Registered Node ha-577290-m03 in Controller
	  Normal  RegisteredNode           9m55s                  node-controller  Node ha-577290-m03 event: Registered Node ha-577290-m03 in Controller
	  Normal  RegisteredNode           7m17s                  node-controller  Node ha-577290-m03 event: Registered Node ha-577290-m03 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-577290-m03 event: Registered Node ha-577290-m03 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-577290-m03 event: Registered Node ha-577290-m03 in Controller
	  Normal  NodeNotReady             5m13s                  node-controller  Node ha-577290-m03 status is now: NodeNotReady
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x8 over 4m22s)  kubelet          Node ha-577290-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x8 over 4m22s)  kubelet          Node ha-577290-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x8 over 4m22s)  kubelet          Node ha-577290-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-577290-m03 event: Registered Node ha-577290-m03 in Controller
	
	
	Name:               ha-577290-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-577290-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=ha-577290
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T09_29_23_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:29:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-577290-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:37:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-577290-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                217e31dc-4cef-4738-9773-fc168032cffb
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7xtwk       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m40s
	  kube-system                 kube-proxy-6mkwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  Starting                 8m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m40s (x3 over 8m40s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m40s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  NodeHasSufficientPID     8m40s (x3 over 8m40s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m40s (x3 over 8m40s)  kubelet          Node ha-577290-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           8m36s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  RegisteredNode           8m36s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  NodeReady                7m57s                  kubelet          Node ha-577290-m04 status is now: NodeReady
	  Normal  RegisteredNode           7m17s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  NodeNotReady             5m13s                  node-controller  Node ha-577290-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m15s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m15s)  kubelet          Node ha-577290-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x8 over 4m15s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [f33da4a57e7abac3ebb4c2bb796754d89a55d77cae917a4638e1dc7bb54b55b9] <==
	{"level":"warn","ts":"2025-11-15T09:33:40.247900Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.265462Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.365484Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.464817Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.485329Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.493863Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.515481Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.523410Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.526525Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.548747Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.565101Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.566078Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.610420Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.665863Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-11-15T09:33:40.764488Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"9a2bf1be0b18fe46","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T09:33:40.764633Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"9a2bf1be0b18fe46","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-15T09:33:41.994373Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"9a2bf1be0b18fe46","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-15T09:33:41.994510Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:33:41.994584Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:33:41.995057Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"9a2bf1be0b18fe46","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-15T09:33:41.995115Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:33:42.005335Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:33:42.005461Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:33:42.397124Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9a2bf1be0b18fe46","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T09:33:42.397161Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9a2bf1be0b18fe46","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 09:38:03 up  1:20,  0 user,  load average: 0.46, 0.91, 1.12
	Linux ha-577290 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35abc581515dce0fd200cca6331404c3173165c3dfb1cc5aeb6f1044b505b43a] <==
	I1115 09:37:28.630246       1 main.go:301] handling current node
	I1115 09:37:38.627662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:38.627694       1 main.go:301] handling current node
	I1115 09:37:38.627710       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 09:37:38.627716       1 main.go:324] Node ha-577290-m02 has CIDR [10.244.1.0/24] 
	I1115 09:37:38.627945       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 09:37:38.627964       1 main.go:324] Node ha-577290-m03 has CIDR [10.244.2.0/24] 
	I1115 09:37:38.628061       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 09:37:38.628070       1 main.go:324] Node ha-577290-m04 has CIDR [10.244.3.0/24] 
	I1115 09:37:48.628812       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:48.628852       1 main.go:301] handling current node
	I1115 09:37:48.628872       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 09:37:48.628879       1 main.go:324] Node ha-577290-m02 has CIDR [10.244.1.0/24] 
	I1115 09:37:48.629073       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 09:37:48.629085       1 main.go:324] Node ha-577290-m03 has CIDR [10.244.2.0/24] 
	I1115 09:37:48.629195       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 09:37:48.629204       1 main.go:324] Node ha-577290-m04 has CIDR [10.244.3.0/24] 
	I1115 09:37:58.627110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:58.627140       1 main.go:301] handling current node
	I1115 09:37:58.627156       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 09:37:58.627161       1 main.go:324] Node ha-577290-m02 has CIDR [10.244.1.0/24] 
	I1115 09:37:58.627354       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 09:37:58.627366       1 main.go:324] Node ha-577290-m03 has CIDR [10.244.2.0/24] 
	I1115 09:37:58.627527       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 09:37:58.627542       1 main.go:324] Node ha-577290-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [98b9fc9a33f0b40586e635c881668594f59cdd960b26204a457a95a2020bd154] <==
	I1115 09:31:57.310740       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 09:31:57.310755       1 cache.go:39] Caches are synced for autoregister controller
	I1115 09:31:57.310792       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 09:31:57.311036       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 09:31:57.311382       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 09:31:57.311580       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 09:31:57.311710       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 09:31:57.311719       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 09:31:57.311945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 09:31:57.318282       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 09:31:57.319906       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 09:31:57.327751       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 09:31:57.327785       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 09:31:57.327815       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:31:57.336835       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 09:31:57.345583       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 09:31:57.345612       1 policy_source.go:240] refreshing policies
	I1115 09:31:57.367345       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:31:57.862499       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:31:58.217616       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:32:00.980325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:32:01.031732       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:32:01.073303       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:32:29.834834       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:32:29.848629       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [aa99d93bfb4888fbc03108f08590c503f95f20e1969eabb19d4a76ea1be94d6f] <==
	I1115 09:32:00.671851       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:32:00.676247       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:32:00.676716       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 09:32:00.676762       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:32:00.678945       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 09:32:00.679002       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:32:00.679070       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290-m04"
	I1115 09:32:00.679107       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290-m02"
	I1115 09:32:00.679115       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290"
	I1115 09:32:00.679195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290-m03"
	I1115 09:32:00.679235       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 09:32:00.681589       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:32:00.684863       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 09:32:00.690446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:32:00.708870       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:32:00.709008       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-577290-m04"
	I1115 09:32:00.716566       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:32:00.719799       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:32:00.720867       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 09:32:29.843480       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-985s2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-985s2\": the object has been modified; please apply your changes to the latest version and try again"
	I1115 09:32:29.843563       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c1d8f070-a8e6-4a4e-bd8c-daa4e92e5c06", APIVersion:"v1", ResourceVersion:"312", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-985s2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-985s2": the object has been modified; please apply your changes to the latest version and try again
	I1115 09:32:38.808314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-577290-m04"
	I1115 09:32:50.688455       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1115 09:33:40.817972       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 09:34:01.605376       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-577290-m04"
	
	
	==> kube-proxy [cee33caab4e63c53b0f16030d6b7e5ed117b6d8deb336214e6325e4c21565d5d] <==
	I1115 09:31:58.220911       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:31:58.302196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:31:58.403132       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:31:58.403172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:31:58.403265       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:31:58.427987       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:31:58.428075       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:31:58.437466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:31:58.438106       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:31:58.438156       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:31:58.440186       1 config.go:309] "Starting node config controller"
	I1115 09:31:58.440198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:31:58.440205       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:31:58.440496       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:31:58.440506       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:31:58.440537       1 config.go:200] "Starting service config controller"
	I1115 09:31:58.440546       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:31:58.440559       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:31:58.440583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:31:58.541578       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:31:58.541927       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:31:58.542004       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bf31a867595678c370bce5d49663eec7f39f09c0ffba1367b034ab02c073ea71] <==
	I1115 09:31:52.778130       1 serving.go:386] Generated self-signed cert in-memory
	I1115 09:31:57.295411       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 09:31:57.295437       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:31:57.300344       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:31:57.300357       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 09:31:57.300378       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:31:57.300385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:31:57.300429       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:31:57.300385       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 09:31:57.300718       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 09:31:57.300752       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 09:31:57.401581       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:31:57.401710       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 09:31:57.401738       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:31:57 ha-577290 kubelet[750]: E1115 09:31:57.406670     750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-577290\" already exists" pod="kube-system/kube-vip-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.406703     750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: E1115 09:31:57.413764     750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-577290\" already exists" pod="kube-system/etcd-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.413799     750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: E1115 09:31:57.421121     750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-577290\" already exists" pod="kube-system/kube-apiserver-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.438492     750 kubelet_node_status.go:124] "Node was previously registered" node="ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.438580     750 kubelet_node_status.go:78] "Successfully registered node" node="ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.438616     750 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.439452     750 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.750649     750 apiserver.go:52] "Watching apiserver"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.754485     750 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-577290" podUID="8b3a5624-ba15-4654-b2b4-c63e078af3c6"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.766414     750 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.766442     750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.774841     750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f8477f86b3c1a1379dba41d926e4d5" path="/var/lib/kubelet/pods/93f8477f86b3c1a1379dba41d926e4d5/volumes"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.799320     750 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-577290" podUID="8b3a5624-ba15-4654-b2b4-c63e078af3c6"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.837154     750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-577290" podStartSLOduration=0.837134922 podStartE2EDuration="837.134922ms" podCreationTimestamp="2025-11-15 09:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:31:57.836889047 +0000 UTC m=+6.151661690" watchObservedRunningTime="2025-11-15 09:31:57.837134922 +0000 UTC m=+6.151907564"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.851377     750 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.858916     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73dc267e-1872-43d0-97a0-6dfffe4327ab-lib-modules\") pod \"kindnet-dsj4t\" (UID: \"73dc267e-1872-43d0-97a0-6dfffe4327ab\") " pod="kube-system/kindnet-dsj4t"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859052     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57c4c9d1-9a69-4190-a1cc-0036d422972c-lib-modules\") pod \"kube-proxy-zkk5v\" (UID: \"57c4c9d1-9a69-4190-a1cc-0036d422972c\") " pod="kube-system/kube-proxy-zkk5v"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859100     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73dc267e-1872-43d0-97a0-6dfffe4327ab-xtables-lock\") pod \"kindnet-dsj4t\" (UID: \"73dc267e-1872-43d0-97a0-6dfffe4327ab\") " pod="kube-system/kindnet-dsj4t"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859126     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/73dc267e-1872-43d0-97a0-6dfffe4327ab-cni-cfg\") pod \"kindnet-dsj4t\" (UID: \"73dc267e-1872-43d0-97a0-6dfffe4327ab\") " pod="kube-system/kindnet-dsj4t"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859180     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c6bdc68a-8f6a-4b01-a166-66128641846b-tmp\") pod \"storage-provisioner\" (UID: \"c6bdc68a-8f6a-4b01-a166-66128641846b\") " pod="kube-system/storage-provisioner"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859201     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57c4c9d1-9a69-4190-a1cc-0036d422972c-xtables-lock\") pod \"kube-proxy-zkk5v\" (UID: \"57c4c9d1-9a69-4190-a1cc-0036d422972c\") " pod="kube-system/kube-proxy-zkk5v"
	Nov 15 09:32:06 ha-577290 kubelet[750]: I1115 09:32:06.508847     750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 09:32:28 ha-577290 kubelet[750]: I1115 09:32:28.887731     750 scope.go:117] "RemoveContainer" containerID="db97b636d1fb37a94b9cc153f99d6526bb0228407a65710988f5f94aa08f1910"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-577290 -n ha-577290
helpers_test.go:269: (dbg) Run:  kubectl --context ha-577290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (433.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-577290" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-577290\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-577290\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-577290\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-577290
helpers_test.go:243: (dbg) docker inspect ha-577290:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1",
	        "Created": "2025-11-15T09:26:44.261814815Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 429099,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:31:45.502200821Z",
	            "FinishedAt": "2025-11-15T09:31:44.848068466Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/hosts",
	        "LogPath": "/var/lib/docker/containers/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1/55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1-json.log",
	        "Name": "/ha-577290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-577290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-577290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55fd204192d284fbef9f2da2e9045f3bab36074714add4280e505121ea7188e1",
	                "LowerDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/deaa5ca0a1e34d573faceacf362b7382f9b20153a1a4f4b48a2d020c0b752fe7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-577290",
	                "Source": "/var/lib/docker/volumes/ha-577290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-577290",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-577290",
	                "name.minikube.sigs.k8s.io": "ha-577290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "acc491fab32d2cd65172330feb24af61e80c585358abfd8158cdefa06e7c42ee",
	            "SandboxKey": "/var/run/docker/netns/acc491fab32d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "Networks": {
	                "ha-577290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a8fb985664d5790039e66f3c687f2a82ee3c69ad2fee979f63d3b79d803a991",
	                    "EndpointID": "3837089187f6cc16fd8cb01329916fb6aadb5ac9bc7b469563f35a001ef3675a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0e:36:12:84:b4:30",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-577290",
	                        "55fd204192d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
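In the Ports map above, the container's 22/tcp is published on 127.0.0.1:33184, which is the address the restart log below dials for SSH provisioning. Docker evaluates `--format` strings as Go templates over the inspect structure, so the lookup the log uses, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, can be reproduced against this data. A minimal sketch, assuming only the field names shown in the inspect output (the struct definitions and sample values below are illustrative, not docker's real types):

```go
// A sketch only: evaluate the same Go template that the restart log passes to
// "docker container inspect -f" against a hand-built copy of the Ports map
// shown in the inspect output above. The struct layout and sample values are
// assumptions taken from that output.
package main

import (
	"os"
	"text/template"
)

type portBinding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp":   {{HostIp: "127.0.0.1", HostPort: "33184"}},
		"8443/tcp": {{HostIp: "127.0.0.1", HostPort: "33187"}},
	}

	// Same format string as the "docker container inspect -f" calls in the log below.
	tmpl := template.Must(template.New("sshPort").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 33184
		panic(err)
	}
}
```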
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-577290 -n ha-577290
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 logs -n 25: (1.109631554s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-577290 ssh -n ha-577290-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m02 sudo cat /home/docker/cp-test_ha-577290-m03_ha-577290-m02.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m03:/home/docker/cp-test.txt ha-577290-m04:/home/docker/cp-test_ha-577290-m03_ha-577290-m04.txt              │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test_ha-577290-m03_ha-577290-m04.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp testdata/cp-test.txt ha-577290-m04:/home/docker/cp-test.txt                                                            │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile512031102/001/cp-test_ha-577290-m04.txt │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290:/home/docker/cp-test_ha-577290-m04_ha-577290.txt                      │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290 sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290.txt                                                │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290-m02:/home/docker/cp-test_ha-577290-m04_ha-577290-m02.txt              │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m02 sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290-m02.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ cp      │ ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290-m03:/home/docker/cp-test_ha-577290-m04_ha-577290-m03.txt              │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ ssh     │ ha-577290 ssh -n ha-577290-m03 sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290-m03.txt                                        │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ node    │ ha-577290 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ node    │ ha-577290 node start m02 --alsologtostderr -v 5                                                                                     │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:30 UTC │
	│ node    │ ha-577290 node list --alsologtostderr -v 5                                                                                          │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │                     │
	│ stop    │ ha-577290 stop --alsologtostderr -v 5                                                                                               │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:30 UTC │ 15 Nov 25 09:31 UTC │
	│ start   │ ha-577290 start --wait true --alsologtostderr -v 5                                                                                  │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:31 UTC │                     │
	│ node    │ ha-577290 node list --alsologtostderr -v 5                                                                                          │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │                     │
	│ node    │ ha-577290 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-577290 │ jenkins │ v1.37.0 │ 15 Nov 25 09:38 UTC │ 15 Nov 25 09:38 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:31:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:31:45.266575  428896 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:31:45.266886  428896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:31:45.266898  428896 out.go:374] Setting ErrFile to fd 2...
	I1115 09:31:45.266902  428896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:31:45.267163  428896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:31:45.267737  428896 out.go:368] Setting JSON to false
	I1115 09:31:45.268710  428896 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4446,"bootTime":1763194659,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:31:45.268819  428896 start.go:143] virtualization: kvm guest
	I1115 09:31:45.270819  428896 out.go:179] * [ha-577290] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:31:45.272427  428896 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:31:45.272431  428896 notify.go:221] Checking for updates...
	I1115 09:31:45.274773  428896 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:31:45.276134  428896 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:45.277406  428896 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:31:45.278544  428896 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:31:45.280004  428896 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:31:45.281655  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:45.281802  428896 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:31:45.305468  428896 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:31:45.305577  428896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:31:45.363884  428896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-15 09:31:45.353980004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:31:45.363994  428896 docker.go:319] overlay module found
	I1115 09:31:45.366036  428896 out.go:179] * Using the docker driver based on existing profile
	I1115 09:31:45.367327  428896 start.go:309] selected driver: docker
	I1115 09:31:45.367347  428896 start.go:930] validating driver "docker" against &{Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:45.367524  428896 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:31:45.367608  428896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:31:45.426878  428896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-15 09:31:45.417064116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:31:45.427845  428896 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:31:45.427892  428896 cni.go:84] Creating CNI manager for ""
	I1115 09:31:45.427961  428896 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1115 09:31:45.428020  428896 start.go:353] cluster config:
	{Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:45.429910  428896 out.go:179] * Starting "ha-577290" primary control-plane node in "ha-577290" cluster
	I1115 09:31:45.431277  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:31:45.432779  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:31:45.434027  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:45.434081  428896 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:31:45.434108  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:31:45.434157  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:31:45.434217  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:31:45.434231  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:31:45.434406  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:45.454978  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:31:45.455002  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:31:45.455026  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:31:45.455057  428896 start.go:360] acquireMachinesLock for ha-577290: {Name:mk6172d84dd1d32a54848cf1d049455806d86fc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:31:45.455126  428896 start.go:364] duration metric: took 46.262µs to acquireMachinesLock for "ha-577290"
	I1115 09:31:45.455149  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:31:45.455159  428896 fix.go:54] fixHost starting: 
	I1115 09:31:45.455379  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:31:45.473405  428896 fix.go:112] recreateIfNeeded on ha-577290: state=Stopped err=<nil>
	W1115 09:31:45.473441  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:31:45.475321  428896 out.go:252] * Restarting existing docker container for "ha-577290" ...
	I1115 09:31:45.475413  428896 cli_runner.go:164] Run: docker start ha-577290
	I1115 09:31:45.734297  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:31:45.753588  428896 kic.go:430] container "ha-577290" state is running.
	I1115 09:31:45.753944  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:45.772816  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:45.773098  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:31:45.773176  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:45.793693  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:45.793956  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:45.793974  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:31:45.794782  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46998->127.0.0.1:33184: read: connection reset by peer
	I1115 09:31:48.924615  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290
	
	I1115 09:31:48.924669  428896 ubuntu.go:182] provisioning hostname "ha-577290"
	I1115 09:31:48.924735  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:48.943068  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:48.943339  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:48.943354  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290 && echo "ha-577290" | sudo tee /etc/hostname
	I1115 09:31:49.082618  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290
	
	I1115 09:31:49.082703  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.100574  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:49.100818  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:49.100842  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:31:49.230624  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:31:49.230659  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:31:49.230707  428896 ubuntu.go:190] setting up certificates
	I1115 09:31:49.230722  428896 provision.go:84] configureAuth start
	I1115 09:31:49.230803  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:49.249474  428896 provision.go:143] copyHostCerts
	I1115 09:31:49.249521  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:49.249578  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:31:49.249598  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:49.249677  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:31:49.249798  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:49.249825  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:31:49.249835  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:49.249880  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:31:49.250060  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:49.250160  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:31:49.250181  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:49.250240  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:31:49.250337  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290 san=[127.0.0.1 192.168.49.2 ha-577290 localhost minikube]
	I1115 09:31:49.553270  428896 provision.go:177] copyRemoteCerts
	I1115 09:31:49.553355  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:31:49.553408  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.571907  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:49.667671  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:31:49.667749  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:31:49.687153  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:31:49.687230  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1115 09:31:49.705517  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:31:49.705588  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:31:49.723853  428896 provision.go:87] duration metric: took 493.11187ms to configureAuth
	I1115 09:31:49.723888  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:31:49.724092  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:49.724201  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:49.742818  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:49.743043  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1115 09:31:49.743057  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:31:50.033292  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:31:50.033324  428896 machine.go:97] duration metric: took 4.26020713s to provisionDockerMachine
	I1115 09:31:50.033341  428896 start.go:293] postStartSetup for "ha-577290" (driver="docker")
	I1115 09:31:50.033354  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:31:50.033471  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:31:50.033538  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.054075  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.149459  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:31:50.153204  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:31:50.153244  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:31:50.153258  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:31:50.153313  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:31:50.153436  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:31:50.153459  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:31:50.153592  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:31:50.161899  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:50.180230  428896 start.go:296] duration metric: took 146.870031ms for postStartSetup
	I1115 09:31:50.180319  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:31:50.180381  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.199337  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.290830  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:31:50.295656  428896 fix.go:56] duration metric: took 4.840490237s for fixHost
	I1115 09:31:50.295688  428896 start.go:83] releasing machines lock for "ha-577290", held for 4.840547311s
	I1115 09:31:50.295776  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:31:50.314561  428896 ssh_runner.go:195] Run: cat /version.json
	I1115 09:31:50.314634  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.314640  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:31:50.314706  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:31:50.333494  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.333615  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:31:50.480680  428896 ssh_runner.go:195] Run: systemctl --version
	I1115 09:31:50.487312  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:31:50.522567  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:31:50.527574  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:31:50.527668  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:31:50.536442  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:31:50.536471  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:31:50.536510  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:31:50.536562  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:31:50.552643  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:31:50.565682  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:31:50.565732  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:31:50.579797  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:31:50.592607  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:31:50.674494  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:31:50.753757  428896 docker.go:234] disabling docker service ...
	I1115 09:31:50.753838  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:31:50.768880  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:31:50.781446  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:31:50.862035  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:31:50.941863  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:31:50.955003  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:31:50.969531  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:31:50.969630  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.978678  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:31:50.978767  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.987922  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:50.997554  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.006963  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:31:51.015699  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.024835  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.033468  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:51.042627  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:31:51.050076  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:31:51.057319  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:51.138979  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:31:51.250267  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:31:51.250325  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:31:51.254431  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:31:51.254482  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:31:51.258072  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:31:51.283265  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:31:51.283331  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:31:51.311792  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:31:51.341627  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:31:51.342956  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:31:51.361359  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:31:51.365628  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:31:51.376129  428896 kubeadm.go:884] updating cluster {Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:31:51.376278  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:51.376328  428896 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:31:51.411138  428896 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:31:51.411158  428896 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:31:51.411201  428896 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:31:51.438061  428896 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:31:51.438086  428896 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:31:51.438095  428896 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1115 09:31:51.438206  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:31:51.438283  428896 ssh_runner.go:195] Run: crio config
	I1115 09:31:51.486595  428896 cni.go:84] Creating CNI manager for ""
	I1115 09:31:51.486621  428896 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1115 09:31:51.486644  428896 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:31:51.486670  428896 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-577290 NodeName:ha-577290 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:31:51.486829  428896 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-577290"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
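The rendered config above is later written to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp and diff steps below). A quick way to inspect it from the host, assuming the profile name used in this run:

	# Show the kubeadm config minikube rendered on the control-plane node.
	minikube -p ha-577290 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# Compare against the config currently applied on the node.
	minikube -p ha-577290 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new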
	
	I1115 09:31:51.486855  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:31:51.486908  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:31:51.499329  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
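Because `lsmod | grep ip_vs` found nothing, kube-vip's control-plane load balancing is skipped and only the VIP is configured. A rough sketch for checking and, where the host kernel ships them, loading the standard IPVS modules:

	# Check for IPVS support; module availability depends on the host kernel (assumption: standard ip_vs module set).
	lsmod | grep ip_vs || true
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh 2>/dev/null || echo "ip_vs modules not available on this kernel"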
	I1115 09:31:51.499466  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
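This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml below, so kubelet runs it as a static pod and the VIP 192.168.49.254 should appear on eth0 of whichever control-plane node holds the lease. A sketch for verifying that, using the profile name and values from this run:

	# Confirm the kube-vip container is running and the VIP is bound (run against a control-plane node).
	minikube -p ha-577290 ssh -- sudo crictl ps --name kube-vip
	minikube -p ha-577290 ssh -- ip addr show eth0 | grep 192.168.49.254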
	I1115 09:31:51.499536  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:31:51.507665  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:31:51.507743  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1115 09:31:51.516035  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1115 09:31:51.528543  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:31:51.540903  428896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1115 09:31:51.553425  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:31:51.566186  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:31:51.569903  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:31:51.579760  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:51.657522  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:31:51.682929  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.2
	I1115 09:31:51.682962  428896 certs.go:195] generating shared ca certs ...
	I1115 09:31:51.682984  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.683252  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:31:51.683303  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:31:51.683316  428896 certs.go:257] generating profile certs ...
	I1115 09:31:51.683414  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:31:51.683438  428896 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd
	I1115 09:31:51.683459  428896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1115 09:31:51.902645  428896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd ...
	I1115 09:31:51.902677  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd: {Name:mk31504058a71e0f7602a819b395f2dc874b4f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.902882  428896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd ...
	I1115 09:31:51.902903  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd: {Name:mk62d65624b9927bec45ce4edc59d90214e67d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:51.903010  428896 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt.7b879ecd -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt
	I1115 09:31:51.903152  428896 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.7b879ecd -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key
	I1115 09:31:51.903287  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:31:51.903304  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:31:51.903316  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:31:51.903328  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:31:51.903338  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:31:51.903350  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:31:51.903360  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:31:51.903371  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:31:51.903381  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:31:51.903453  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:31:51.903493  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:31:51.903503  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:31:51.903523  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:31:51.903545  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:31:51.903572  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:31:51.903616  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:51.903642  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:51.903656  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:31:51.903668  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:31:51.904202  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:31:51.923549  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:31:51.941100  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:31:51.959534  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:31:51.977478  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:31:51.995833  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:31:52.013950  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:31:52.032035  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:31:52.049984  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:31:52.068640  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:31:52.087500  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:31:52.105266  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:31:52.118376  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:31:52.124566  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:31:52.133079  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.137009  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.137067  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:31:52.171540  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:31:52.180359  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:31:52.191734  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.197586  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.197656  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:31:52.238367  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:31:52.248045  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:31:52.257259  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.262431  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.262498  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:31:52.310780  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
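The sequence above implements the standard OpenSSL subject-hash layout: each CA is placed under /usr/share/ca-certificates and symlinked in /etc/ssl/certs under its `openssl x509 -hash` value so TLS clients can locate it. A minimal sketch of the same pattern for an arbitrary CA file (my-ca.pem is a placeholder, not a file from this run):

	# Install a CA into the OpenSSL hash layout (my-ca.pem is hypothetical).
	sudo cp my-ca.pem /usr/share/ca-certificates/my-ca.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${HASH}.0"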
	I1115 09:31:52.321838  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:31:52.327131  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:31:52.384824  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:31:52.420556  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:31:52.456174  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:31:52.492992  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:31:52.527605  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
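Each `-checkend 86400` call above asks whether the certificate expires within the next 24 hours (exit status 0 means it does not). To see the actual expiry date for one of the same certificates, run on the node (e.g. via minikube ssh):

	# Print the expiry and repeat the 24h validity check for one of the certs inspected above.
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least 24h"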
	I1115 09:31:52.563847  428896 kubeadm.go:401] StartCluster: {Name:ha-577290 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:31:52.564002  428896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:31:52.564061  428896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:31:52.598315  428896 cri.go:89] found id: "f33da4a57e7abac3ebb4c2bb796754d89a55d77cae917a4638e1dc7bb54b55b9"
	I1115 09:31:52.598342  428896 cri.go:89] found id: "6a62ffd50e27a5d8290e1041b339ee1c4011f892ee0b67e96eca3abce2936268"
	I1115 09:31:52.598346  428896 cri.go:89] found id: "98b9fc9a33f0b40586e635c881668594f59cdd960b26204a457a95a2020bd154"
	I1115 09:31:52.598352  428896 cri.go:89] found id: "bf31a867595678c370bce5d49663eec7f39f09c0ffba1367b034ab02c073ea71"
	I1115 09:31:52.598356  428896 cri.go:89] found id: "aa99d93bfb4888fbc03108f08590c503f95f20e1969eabb19d4a76ea1be94d6f"
	I1115 09:31:52.598361  428896 cri.go:89] found id: ""
	I1115 09:31:52.598433  428896 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 09:31:52.610898  428896 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:31:52Z" level=error msg="open /run/runc: no such file or directory"
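The failure is benign: `runc list` needs /run/runc, which does not exist on this CRI-O node, so minikube falls back to the container IDs it already collected through crictl. The equivalent CRI query, exactly as used a few lines above (run on the node):

	# List kube-system container IDs via CRI, matching the crictl step in this log.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system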
	I1115 09:31:52.610984  428896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:31:52.619008  428896 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 09:31:52.619032  428896 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 09:31:52.619095  428896 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 09:31:52.626928  428896 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:31:52.627429  428896 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-577290" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:52.627702  428896 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-355485/kubeconfig needs updating (will repair): [kubeconfig missing "ha-577290" cluster setting kubeconfig missing "ha-577290" context setting]
	I1115 09:31:52.628120  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.628857  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:31:52.629429  428896 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 09:31:52.629443  428896 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1115 09:31:52.629457  428896 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 09:31:52.629464  428896 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 09:31:52.629469  428896 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 09:31:52.629474  428896 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 09:31:52.629935  428896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 09:31:52.638596  428896 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1115 09:31:52.638622  428896 kubeadm.go:602] duration metric: took 19.583961ms to restartPrimaryControlPlane
	I1115 09:31:52.638632  428896 kubeadm.go:403] duration metric: took 74.798878ms to StartCluster
	I1115 09:31:52.638659  428896 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.638739  428896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:31:52.639509  428896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:31:52.639770  428896 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:31:52.639796  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:31:52.639817  428896 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:31:52.640075  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:52.642696  428896 out.go:179] * Enabled addons: 
	I1115 09:31:52.643939  428896 addons.go:515] duration metric: took 4.127185ms for enable addons: enabled=[]
	I1115 09:31:52.643981  428896 start.go:247] waiting for cluster config update ...
	I1115 09:31:52.643992  428896 start.go:256] writing updated cluster config ...
	I1115 09:31:52.645418  428896 out.go:203] 
	I1115 09:31:52.646875  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:52.646991  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:52.648625  428896 out.go:179] * Starting "ha-577290-m02" control-plane node in "ha-577290" cluster
	I1115 09:31:52.649693  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:31:52.651012  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:31:52.652316  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:31:52.652334  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:31:52.652420  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:31:52.652479  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:31:52.652496  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:31:52.652639  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:52.677157  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:31:52.677183  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:31:52.677206  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:31:52.677237  428896 start.go:360] acquireMachinesLock for ha-577290-m02: {Name:mkf112ea76ada558a569f224e46caac6b694e64c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:31:52.677308  428896 start.go:364] duration metric: took 49.241µs to acquireMachinesLock for "ha-577290-m02"
	I1115 09:31:52.677330  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:31:52.677340  428896 fix.go:54] fixHost starting: m02
	I1115 09:31:52.677664  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:31:52.698576  428896 fix.go:112] recreateIfNeeded on ha-577290-m02: state=Stopped err=<nil>
	W1115 09:31:52.698609  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:31:52.700325  428896 out.go:252] * Restarting existing docker container for "ha-577290-m02" ...
	I1115 09:31:52.700427  428896 cli_runner.go:164] Run: docker start ha-577290-m02
	I1115 09:31:53.006147  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:31:53.028889  428896 kic.go:430] container "ha-577290-m02" state is running.
	I1115 09:31:53.029347  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:53.051018  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:31:53.051301  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:31:53.051366  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:53.074164  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:53.074499  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:53.074516  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:31:53.075211  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57138->127.0.0.1:33189: read: connection reset by peer
	I1115 09:31:56.207665  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m02
	
	I1115 09:31:56.207697  428896 ubuntu.go:182] provisioning hostname "ha-577290-m02"
	I1115 09:31:56.207780  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.232566  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:56.232897  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:56.232924  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m02 && echo "ha-577290-m02" | sudo tee /etc/hostname
	I1115 09:31:56.391849  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m02
	
	I1115 09:31:56.391935  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.414665  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:56.414967  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:56.414995  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:31:56.561504  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:31:56.561540  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:31:56.561563  428896 ubuntu.go:190] setting up certificates
	I1115 09:31:56.561579  428896 provision.go:84] configureAuth start
	I1115 09:31:56.561651  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:56.584955  428896 provision.go:143] copyHostCerts
	I1115 09:31:56.584995  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:56.585033  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:31:56.585051  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:31:56.585145  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:31:56.585258  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:56.585290  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:31:56.585298  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:31:56.585343  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:31:56.585423  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:56.585444  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:31:56.585450  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:31:56.585488  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:31:56.585575  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m02 san=[127.0.0.1 192.168.49.3 ha-577290-m02 localhost minikube]
	I1115 09:31:56.824747  428896 provision.go:177] copyRemoteCerts
	I1115 09:31:56.824826  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:31:56.824877  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:56.850475  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:56.951132  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:31:56.951210  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:31:56.977882  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:31:56.977954  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:31:56.997077  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:31:56.997147  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1115 09:31:57.016347  428896 provision.go:87] duration metric: took 454.750366ms to configureAuth
	I1115 09:31:57.016381  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:31:57.016674  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:31:57.016833  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.052679  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:31:57.053005  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1115 09:31:57.053029  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:31:57.426092  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:31:57.426126  428896 machine.go:97] duration metric: took 4.374809168s to provisionDockerMachine
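The drop-in written just above marks the service CIDR 10.96.0.0/12 as an insecure registry for CRI-O and then restarts the runtime. A small sketch for confirming the result on the m02 node (run via minikube ssh; file path taken from the command above):

	# Verify the drop-in exists and CRI-O came back up after the restart.
	sudo cat /etc/sysconfig/crio.minikube
	systemctl is-active crio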
	I1115 09:31:57.426140  428896 start.go:293] postStartSetup for "ha-577290-m02" (driver="docker")
	I1115 09:31:57.426151  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:31:57.426220  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:31:57.426262  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.448519  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.545209  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:31:57.549384  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:31:57.549439  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:31:57.549452  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:31:57.549519  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:31:57.549596  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:31:57.549608  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:31:57.549687  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:31:57.558189  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:31:57.580235  428896 start.go:296] duration metric: took 154.07621ms for postStartSetup
	I1115 09:31:57.580333  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:31:57.580386  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.603433  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.701219  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:31:57.706336  428896 fix.go:56] duration metric: took 5.028989139s for fixHost
	I1115 09:31:57.706368  428896 start.go:83] releasing machines lock for "ha-577290-m02", held for 5.029048241s
	I1115 09:31:57.706470  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m02
	I1115 09:31:57.727402  428896 out.go:179] * Found network options:
	I1115 09:31:57.728724  428896 out.go:179]   - NO_PROXY=192.168.49.2
	W1115 09:31:57.729967  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:31:57.730005  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:31:57.730073  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:31:57.730128  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.730159  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:31:57.730230  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m02
	I1115 09:31:57.748817  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.750362  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m02/id_rsa Username:docker}
	I1115 09:31:57.903068  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:31:57.937805  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:31:57.937874  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:31:57.947024  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
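The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so that only the kindnet config stays active; here nothing needed disabling. To see what is actually present on the node:

	# Inspect active vs. disabled CNI configs (run on the node).
	ls -la /etc/cni/net.d/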
	I1115 09:31:57.947053  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:31:57.947136  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:31:57.947208  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:31:57.963666  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:31:57.976613  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:31:57.976675  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:31:57.991891  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:31:58.006003  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:31:58.153545  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:31:58.310509  428896 docker.go:234] disabling docker service ...
	I1115 09:31:58.310582  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:31:58.330775  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:31:58.348091  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:31:58.501312  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:31:58.629095  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:31:58.643176  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:31:58.658526  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:31:58.658590  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.668426  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:31:58.668483  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.679145  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.689023  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.698596  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:31:58.707252  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.717022  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.726715  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:31:58.735906  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:31:58.743685  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:31:58.751568  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:31:58.887672  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:29.141191  428896 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.253455227s)
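The crio restart on m02 took just over 90 seconds, which accounts for nearly the entire 09:31:58 to 09:33:29 gap in this log. If this recurs, the CRI-O journal on the node for that window is the first place to look; a rough sketch (time window taken from this run):

	# Inspect why the CRI-O restart was slow (run on the m02 node).
	sudo journalctl -u crio --since "09:31:58" --until "09:33:29" --no-pager | tail -n 50
	systemctl show crio -p ExecMainStartTimestamp -p ActiveEnterTimestamp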
	I1115 09:33:29.141240  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:29.141300  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:29.145595  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:29.145655  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:29.149342  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:29.174182  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:29.174254  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:29.204881  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:29.236181  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:29.237785  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:29.239150  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:29.257605  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:29.262168  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:29.273241  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:29.273540  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:29.273770  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:29.291615  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:29.291888  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.3
	I1115 09:33:29.291900  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:29.291916  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:29.292078  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:29.292119  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:29.292129  428896 certs.go:257] generating profile certs ...
	I1115 09:33:29.292200  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:33:29.292255  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.c5636f69
	I1115 09:33:29.292289  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:33:29.292300  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:29.292314  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:29.292326  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:29.292338  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:29.292352  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:33:29.292367  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:33:29.292387  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:33:29.292421  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:33:29.292481  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:29.292511  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:29.292522  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:29.292544  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:29.292568  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:29.292596  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:29.292645  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:29.292674  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.292685  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.292705  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.292756  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:33:29.311158  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:33:29.397746  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 09:33:29.402107  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 09:33:29.410807  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 09:33:29.414570  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 09:33:29.423209  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 09:33:29.426969  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 09:33:29.435369  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 09:33:29.439110  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 09:33:29.447938  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 09:33:29.451581  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 09:33:29.460040  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 09:33:29.463847  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 09:33:29.472802  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:29.491640  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:29.509789  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:29.527041  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:29.544384  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:33:29.562153  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:33:29.580258  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:29.598677  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:33:29.616730  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:29.635496  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:29.653811  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:29.671993  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 09:33:29.684693  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 09:33:29.697982  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 09:33:29.710750  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 09:33:29.723405  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 09:33:29.735786  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 09:33:29.748861  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 09:33:29.761801  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:29.768042  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:29.777574  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.781659  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.781740  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:29.817272  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:29.826567  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:29.836067  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.839987  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.840045  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:29.875123  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:29.884911  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:29.893650  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.897547  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.897614  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:29.933220  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:29.942015  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:29.946107  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:33:29.981924  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:33:30.017346  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:33:30.055728  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:33:30.091801  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:33:30.128083  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
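
The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate on the node is still valid for at least 24 hours before the existing machine is reused. A minimal Go sketch of the same check using crypto/x509 (not minikube's actual code; the certificate path is one of those checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM-encoded certificate at path is still
    // valid for at least d, which is what `openssl x509 -checkend <seconds>` tests.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for another 24h:", ok)
    }
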
	I1115 09:33:30.165477  428896 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1115 09:33:30.165602  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:30.165633  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:33:30.165686  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:33:30.178477  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:33:30.178550  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
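
The `lsmod | grep ip_vs` check above gates kube-vip's control-plane load-balancing: because no ip_vs kernel modules are loaded, the generated static-pod manifest omits IPVS-based load balancing. Since lsmod is just a formatted view of /proc/modules, a hedged Go equivalent of that gate (a sketch, not the function in kube-vip.go) could read the file directly:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipvsLoaded scans /proc/modules (the data lsmod prints) for any module
    // whose name starts with "ip_vs".
    func ipvsLoaded() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := ipvsLoaded()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("ip_vs modules loaded:", ok)
    }
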
	I1115 09:33:30.178626  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:30.187181  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:30.187255  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 09:33:30.195966  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:30.209403  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:30.222151  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:33:30.235250  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:30.239303  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:30.249724  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:30.355117  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:30.368971  428896 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:30.369229  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:30.370723  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:30.372269  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:30.476752  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:30.491166  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:30.491243  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:30.491612  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m02" to be "Ready" ...
	W1115 09:33:32.494974  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:34.495865  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:36.995901  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	W1115 09:33:39.495623  428896 node_ready.go:57] node "ha-577290-m02" has "Ready":"False" status (will retry)
	I1115 09:33:40.495728  428896 node_ready.go:49] node "ha-577290-m02" is "Ready"
	I1115 09:33:40.495762  428896 node_ready.go:38] duration metric: took 10.004119226s for node "ha-577290-m02" to be "Ready" ...
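
The node_ready wait above polls the Node object for ha-577290-m02 every couple of seconds until its Ready condition turns True (10s here). A minimal client-go sketch of that polling loop, assuming a placeholder kubeconfig path rather than minikube's internal client helpers:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the named node until its Ready condition is True
    // or the timeout expires (the log above waits up to 6m0s).
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
    	// Placeholder kubeconfig path, not the one used by this test run.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(cs, "ha-577290-m02", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready")
    }
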
	I1115 09:33:40.495779  428896 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:33:40.495830  428896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:33:40.508005  428896 api_server.go:72] duration metric: took 10.138962389s to wait for apiserver process to appear ...
	I1115 09:33:40.508034  428896 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:33:40.508058  428896 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:33:40.513137  428896 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:33:40.514147  428896 api_server.go:141] control plane version: v1.34.1
	I1115 09:33:40.514171  428896 api_server.go:131] duration metric: took 6.130383ms to wait for apiserver health ...
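
The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A minimal sketch of that probe; unlike the real check, which trusts the cluster CA, this one skips server-certificate verification for brevity:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: the log above verifies against the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }
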
	I1115 09:33:40.514180  428896 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:33:40.521806  428896 system_pods.go:59] 26 kube-system pods found
	I1115 09:33:40.521847  428896 system_pods.go:61] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:40.521853  428896 system_pods.go:61] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:40.521857  428896 system_pods.go:61] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:40.521860  428896 system_pods.go:61] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:40.521865  428896 system_pods.go:61] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running
	I1115 09:33:40.521868  428896 system_pods.go:61] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:40.521871  428896 system_pods.go:61] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:40.521877  428896 system_pods.go:61] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:40.521888  428896 system_pods.go:61] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:40.521903  428896 system_pods.go:61] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:40.521907  428896 system_pods.go:61] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:40.521910  428896 system_pods.go:61] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running
	I1115 09:33:40.521913  428896 system_pods.go:61] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:40.521917  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:40.521922  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running
	I1115 09:33:40.521926  428896 system_pods.go:61] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:40.521929  428896 system_pods.go:61] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:40.521932  428896 system_pods.go:61] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:40.521935  428896 system_pods.go:61] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:40.521938  428896 system_pods.go:61] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:40.521941  428896 system_pods.go:61] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:40.521943  428896 system_pods.go:61] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running
	I1115 09:33:40.521947  428896 system_pods.go:61] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:40.521951  428896 system_pods.go:61] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:40.521953  428896 system_pods.go:61] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:40.521956  428896 system_pods.go:61] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:40.521962  428896 system_pods.go:74] duration metric: took 7.776979ms to wait for pod list to return data ...
	I1115 09:33:40.521973  428896 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:33:40.525281  428896 default_sa.go:45] found service account: "default"
	I1115 09:33:40.525304  428896 default_sa.go:55] duration metric: took 3.325885ms for default service account to be created ...
	I1115 09:33:40.525314  428896 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:33:40.532899  428896 system_pods.go:86] 26 kube-system pods found
	I1115 09:33:40.532942  428896 system_pods.go:89] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:40.532948  428896 system_pods.go:89] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:40.532952  428896 system_pods.go:89] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:40.532955  428896 system_pods.go:89] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:40.532958  428896 system_pods.go:89] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running
	I1115 09:33:40.532962  428896 system_pods.go:89] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:40.532965  428896 system_pods.go:89] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:40.532972  428896 system_pods.go:89] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:40.532980  428896 system_pods.go:89] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:40.532985  428896 system_pods.go:89] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:40.532988  428896 system_pods.go:89] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:40.532991  428896 system_pods.go:89] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running
	I1115 09:33:40.532997  428896 system_pods.go:89] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:40.533001  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:40.533007  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running
	I1115 09:33:40.533012  428896 system_pods.go:89] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:40.533018  428896 system_pods.go:89] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:40.533022  428896 system_pods.go:89] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:40.533027  428896 system_pods.go:89] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:40.533030  428896 system_pods.go:89] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:40.533033  428896 system_pods.go:89] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:40.533036  428896 system_pods.go:89] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running
	I1115 09:33:40.533039  428896 system_pods.go:89] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:40.533042  428896 system_pods.go:89] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:40.533047  428896 system_pods.go:89] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:40.533052  428896 system_pods.go:89] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:40.533059  428896 system_pods.go:126] duration metric: took 7.740388ms to wait for k8s-apps to be running ...
	I1115 09:33:40.533069  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:33:40.533115  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:33:40.546948  428896 system_svc.go:56] duration metric: took 13.851414ms WaitForService to wait for kubelet
	I1115 09:33:40.546981  428896 kubeadm.go:587] duration metric: took 10.17796689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:33:40.547004  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:33:40.550887  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550928  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550955  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550959  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550963  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550966  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550969  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:40.550972  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:40.550976  428896 node_conditions.go:105] duration metric: took 3.967331ms to run NodePressure ...
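
The NodePressure step above just reads each node's reported capacity (here 304681132Ki of ephemeral storage and 8 CPUs on all four nodes). A hedged client-go sketch that lists nodes and prints those two capacity fields, again with a placeholder kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path, not the profile's own kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }
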
	I1115 09:33:40.550987  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:33:40.551013  428896 start.go:256] writing updated cluster config ...
	I1115 09:33:40.553290  428896 out.go:203] 
	I1115 09:33:40.555010  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:40.555154  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.556732  428896 out.go:179] * Starting "ha-577290-m03" control-plane node in "ha-577290" cluster
	I1115 09:33:40.558293  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:33:40.559533  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:33:40.560557  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:40.560573  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:33:40.560658  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:33:40.560677  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:33:40.560686  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:33:40.560802  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.581841  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:33:40.581862  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:33:40.581881  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:33:40.581911  428896 start.go:360] acquireMachinesLock for ha-577290-m03: {Name:mk956e932a0a61462f744b4bf6dccfcc158f1677 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:40.581975  428896 start.go:364] duration metric: took 45.083µs to acquireMachinesLock for "ha-577290-m03"
	I1115 09:33:40.582000  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:33:40.582009  428896 fix.go:54] fixHost starting: m03
	I1115 09:33:40.582213  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m03 --format={{.State.Status}}
	I1115 09:33:40.599708  428896 fix.go:112] recreateIfNeeded on ha-577290-m03: state=Stopped err=<nil>
	W1115 09:33:40.599741  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:33:40.601856  428896 out.go:252] * Restarting existing docker container for "ha-577290-m03" ...
	I1115 09:33:40.601929  428896 cli_runner.go:164] Run: docker start ha-577290-m03
	I1115 09:33:40.883039  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m03 --format={{.State.Status}}
	I1115 09:33:40.902259  428896 kic.go:430] container "ha-577290-m03" state is running.
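
The fixHost path above inspects the existing container's state, finds it Stopped, runs `docker start`, and re-inspects until it reports running. A minimal Go sketch that shells out to the same docker CLI calls shown in the log (a sketch, not minikube's kic driver):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState returns the docker container state string ("running", "exited", ...).
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	const name = "ha-577290-m03"
    	state, err := containerState(name)
    	if err != nil {
    		panic(err)
    	}
    	if state != "running" {
    		// Restart the existing container instead of recreating it.
    		if err := exec.Command("docker", "start", name).Run(); err != nil {
    			panic(err)
    		}
    	}
    	state, _ = containerState(name)
    	fmt.Println(name, "state:", state)
    }
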
	I1115 09:33:40.902730  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:40.923104  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:40.923365  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:40.923449  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:40.942829  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:40.943125  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:40.943143  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:40.943747  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45594->127.0.0.1:33194: read: connection reset by peer
	I1115 09:33:44.097198  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m03
	
	I1115 09:33:44.097227  428896 ubuntu.go:182] provisioning hostname "ha-577290-m03"
	I1115 09:33:44.097294  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.119447  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.119771  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.119790  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m03 && echo "ha-577290-m03" | sudo tee /etc/hostname
	I1115 09:33:44.272682  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m03
	
	I1115 09:33:44.272754  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.292482  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.292709  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.292725  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:44.427118  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:33:44.427153  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:33:44.427180  428896 ubuntu.go:190] setting up certificates
	I1115 09:33:44.427192  428896 provision.go:84] configureAuth start
	I1115 09:33:44.427251  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:44.449125  428896 provision.go:143] copyHostCerts
	I1115 09:33:44.449170  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:44.449207  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:33:44.449220  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:44.449315  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:33:44.449479  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:44.449519  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:33:44.449527  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:44.449580  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:33:44.449658  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:44.449684  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:33:44.449692  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:44.449729  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:33:44.449848  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m03 san=[127.0.0.1 192.168.49.4 ha-577290-m03 localhost minikube]
	I1115 09:33:44.532362  428896 provision.go:177] copyRemoteCerts
	I1115 09:33:44.532433  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:44.532473  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.550652  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:44.646162  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:33:44.646224  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:44.664161  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:33:44.664221  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:44.683656  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:33:44.683729  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:33:44.709533  428896 provision.go:87] duration metric: took 282.323517ms to configureAuth
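
configureAuth above generates a fresh docker-machine server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube, signed by the CA under .minikube/certs. A hedged crypto/x509 sketch of signing such a SAN certificate; the CA here is generated in memory purely for illustration, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Illustrative CA; the real code loads ca.pem / ca-key.pem from .minikube/certs.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs reported in the log above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-577290-m03"}},
    		DNSNames:     []string{"ha-577290-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
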
	I1115 09:33:44.709568  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:44.709953  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:44.710431  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:44.730924  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:44.731134  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1115 09:33:44.731151  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:45.072969  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:45.073007  428896 machine.go:97] duration metric: took 4.149624743s to provisionDockerMachine
	I1115 09:33:45.073028  428896 start.go:293] postStartSetup for "ha-577290-m03" (driver="docker")
	I1115 09:33:45.073041  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:45.073117  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:45.073164  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.096852  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.197468  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:45.201750  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:45.201783  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:45.201797  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:33:45.201858  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:33:45.201951  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:33:45.201963  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:33:45.202075  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:33:45.210217  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:45.228458  428896 start.go:296] duration metric: took 155.41494ms for postStartSetup
	I1115 09:33:45.228526  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:45.228575  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.246932  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.337973  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:45.343136  428896 fix.go:56] duration metric: took 4.76111959s for fixHost
	I1115 09:33:45.343165  428896 start.go:83] releasing machines lock for "ha-577290-m03", held for 4.761175125s
	I1115 09:33:45.343237  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:33:45.363267  428896 out.go:179] * Found network options:
	I1115 09:33:45.364603  428896 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1115 09:33:45.365919  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365945  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365965  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:45.365973  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:33:45.366049  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:45.366084  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.366197  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:45.366269  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:33:45.385469  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.385900  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:33:45.512144  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:45.539108  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:45.539183  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:45.548657  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:33:45.548681  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:33:45.548714  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:33:45.548758  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:45.565030  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:45.578828  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:45.578876  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:45.593659  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:45.606896  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:45.719282  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:45.833886  428896 docker.go:234] disabling docker service ...
	I1115 09:33:45.833972  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:45.849553  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:45.863178  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:46.002558  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:46.122751  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:46.135787  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:46.152335  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:46.152386  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.162211  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:33:46.162288  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.172907  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.182146  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.191787  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:46.201198  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.211208  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.221525  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:46.231770  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:46.240242  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
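
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause_image and cgroup_manager lines are replaced, conmon_cgroup is reinserted after cgroup_manager, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A hedged Go sketch of the first two replacements, mirroring `sed -i 's|^.*pattern = .*$|...|'` with a multiline regexp (not minikube's crio.go):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteLine replaces every line matching pattern with repl in the file at path,
    // the same effect as the in-place sed edits in the log above.
    func rewriteLine(path, pattern, repl string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile("(?m)" + pattern)
    	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := rewriteLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := rewriteLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
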
	I1115 09:33:46.248568  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:46.362978  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:33:46.529312  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:46.529407  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:46.534021  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:46.534084  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:46.537777  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:46.562624  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:46.562720  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:46.593038  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:46.624612  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:46.625782  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:46.626701  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 09:33:46.627918  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:46.647913  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:46.652309  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:46.663365  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:46.663617  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:46.663854  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:46.683967  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:46.684227  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.4
	I1115 09:33:46.684240  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:46.684254  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:46.684373  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:46.684442  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:46.684456  428896 certs.go:257] generating profile certs ...
	I1115 09:33:46.684531  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key
	I1115 09:33:46.684570  428896 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key.4e419922
	I1115 09:33:46.684607  428896 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key
	I1115 09:33:46.684619  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:46.684635  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:46.684648  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:46.684658  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:46.684670  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:33:46.684682  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:33:46.684694  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:33:46.684703  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:33:46.684763  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:46.684793  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:46.684803  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:46.684825  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:46.684845  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:46.684867  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:46.684981  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:46.685022  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:46.685039  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:46.685052  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:46.685102  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:33:46.704190  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:33:46.792775  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1115 09:33:46.797318  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1115 09:33:46.806208  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1115 09:33:46.810016  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1115 09:33:46.819830  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1115 09:33:46.823486  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1115 09:33:46.831939  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1115 09:33:46.835879  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1115 09:33:46.844637  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1115 09:33:46.848667  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1115 09:33:46.857507  428896 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1115 09:33:46.861254  428896 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1115 09:33:46.870691  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:46.890068  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:46.908762  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:46.928604  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:46.946771  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1115 09:33:46.966008  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:33:46.985099  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:33:47.004286  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 09:33:47.023701  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:47.044426  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:47.063586  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:47.083517  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1115 09:33:47.097148  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1115 09:33:47.110614  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1115 09:33:47.125289  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1115 09:33:47.139218  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1115 09:33:47.152613  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1115 09:33:47.167316  428896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1115 09:33:47.186848  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:47.196607  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:47.208413  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.212323  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.212377  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:47.248988  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:47.257951  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:47.270018  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.276511  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.276612  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:47.315123  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:47.324272  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:47.333692  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.337850  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.337904  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:47.377447  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:47.386605  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:47.390885  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:33:47.428238  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:33:47.463635  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:33:47.500538  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:33:47.537928  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:33:47.573729  428896 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
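The six openssl runs above are minikube's certificate-expiry check: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which would trigger regeneration. A minimal Go sketch of the same check (illustrative only, not minikube's implementation; the path is just an example taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given duration (mirrors `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; any PEM-encoded certificate works.
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}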
	I1115 09:33:47.608297  428896 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1115 09:33:47.608438  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:47.608465  428896 kube-vip.go:115] generating kube-vip config ...
	I1115 09:33:47.608505  428896 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1115 09:33:47.621813  428896 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:33:47.621905  428896 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
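The YAML above is the kube-vip static-pod manifest minikube generated for this control-plane node: kube-vip runs on the host network, advertises the API-server VIP 192.168.49.254 on eth0 via ARP, and uses a Kubernetes lease (plndr-cp-lock) for leader election; IPVS load-balancing was skipped because the ip_vs modules were not loaded. A hedged Go sketch that renders a cut-down version of such a manifest with text/template (hypothetical helper, not minikube's actual kube-vip generator):

package main

import (
	"os"
	"text/template"
)

// Minimal, illustrative subset of a kube-vip static-pod manifest;
// the real manifest generated above carries many more env vars.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: vip_interface
      value: "{{.Interface}}"
    - name: port
      value: "{{.Port}}"
`

type vipParams struct {
	Image, VIP, Interface, Port string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values copied from the log above; adjust for other clusters.
	_ = t.Execute(os.Stdout, vipParams{
		Image:     "ghcr.io/kube-vip/kube-vip:v1.0.1",
		VIP:       "192.168.49.254",
		Interface: "eth0",
		Port:      "8443",
	})
}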
	I1115 09:33:47.621980  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:47.629857  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:47.629945  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1115 09:33:47.638232  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:47.652261  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:47.666706  428896 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1115 09:33:47.681044  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:47.685137  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:47.696618  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:47.811257  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:47.825255  428896 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:33:47.825603  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:47.827569  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:47.828637  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:47.945833  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:47.960377  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:47.960507  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1115 09:33:47.960779  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m03" to be "Ready" ...
	I1115 09:33:47.964177  428896 node_ready.go:49] node "ha-577290-m03" is "Ready"
	I1115 09:33:47.964207  428896 node_ready.go:38] duration metric: took 3.409493ms for node "ha-577290-m03" to be "Ready" ...
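At this point the rejoined control-plane node is polled until its Node object reports Ready. A sketch of an equivalent readiness check with client-go (assumes k8s.io/client-go; the kubeconfig path is a placeholder, only the node name comes from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has a Ready condition set to True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is illustrative; the node name matches the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), cs, "ha-577290-m03")
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", ready)
}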
	I1115 09:33:47.964220  428896 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:33:47.964274  428896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:33:47.976501  428896 api_server.go:72] duration metric: took 151.188832ms to wait for apiserver process to appear ...
	I1115 09:33:47.976526  428896 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:33:47.976549  428896 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1115 09:33:47.982576  428896 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1115 09:33:47.983614  428896 api_server.go:141] control plane version: v1.34.1
	I1115 09:33:47.983645  428896 api_server.go:131] duration metric: took 7.111217ms to wait for apiserver health ...
	I1115 09:33:47.983656  428896 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:33:47.990372  428896 system_pods.go:59] 26 kube-system pods found
	I1115 09:33:47.990422  428896 system_pods.go:61] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:47.990429  428896 system_pods.go:61] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:47.990435  428896 system_pods.go:61] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:47.990441  428896 system_pods.go:61] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:47.990450  428896 system_pods.go:61] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:33:47.990461  428896 system_pods.go:61] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:47.990470  428896 system_pods.go:61] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:47.990481  428896 system_pods.go:61] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:47.990487  428896 system_pods.go:61] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:47.990493  428896 system_pods.go:61] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:47.990498  428896 system_pods.go:61] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:47.990505  428896 system_pods.go:61] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:33:47.990511  428896 system_pods.go:61] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:47.990517  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:47.990526  428896 system_pods.go:61] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:33:47.990535  428896 system_pods.go:61] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:47.990541  428896 system_pods.go:61] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:47.990544  428896 system_pods.go:61] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:47.990549  428896 system_pods.go:61] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:47.990557  428896 system_pods.go:61] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:47.990562  428896 system_pods.go:61] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:47.990570  428896 system_pods.go:61] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:33:47.990578  428896 system_pods.go:61] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:47.990584  428896 system_pods.go:61] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:47.990592  428896 system_pods.go:61] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:47.990597  428896 system_pods.go:61] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:47.990604  428896 system_pods.go:74] duration metric: took 6.940099ms to wait for pod list to return data ...
	I1115 09:33:47.990618  428896 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:33:47.993458  428896 default_sa.go:45] found service account: "default"
	I1115 09:33:47.993482  428896 default_sa.go:55] duration metric: took 2.857379ms for default service account to be created ...
	I1115 09:33:47.993492  428896 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:33:47.999362  428896 system_pods.go:86] 26 kube-system pods found
	I1115 09:33:47.999436  428896 system_pods.go:89] "coredns-66bc5c9577-hcps6" [61783521-de69-4669-874c-b0a260551902] Running
	I1115 09:33:47.999446  428896 system_pods.go:89] "coredns-66bc5c9577-xqpdq" [929b4b9a-8741-413f-939e-68c92781b1eb] Running
	I1115 09:33:47.999452  428896 system_pods.go:89] "etcd-ha-577290" [3ab153af-3774-4da4-a72e-323d14056944] Running
	I1115 09:33:47.999467  428896 system_pods.go:89] "etcd-ha-577290-m02" [146e26b0-996a-4cf6-a1ac-4e50fc799d1e] Running
	I1115 09:33:47.999481  428896 system_pods.go:89] "etcd-ha-577290-m03" [c61afa72-7aa1-42b1-9844-ae2295e52813] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 09:33:47.999488  428896 system_pods.go:89] "kindnet-7xtwk" [82d2cc3a-bb9c-4fdd-8975-8c804cc2c4d3] Running
	I1115 09:33:47.999498  428896 system_pods.go:89] "kindnet-dsj4t" [73dc267e-1872-43d0-97a0-6dfffe4327ab] Running
	I1115 09:33:47.999510  428896 system_pods.go:89] "kindnet-k8kmn" [350338b0-7cd1-4a6e-8608-b9b16b4a5cac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 09:33:47.999520  428896 system_pods.go:89] "kindnet-ltfl5" [d3873196-930a-44bb-87f0-684c93025bdc] Running
	I1115 09:33:47.999527  428896 system_pods.go:89] "kube-apiserver-ha-577290" [a23f028c-3c3b-4b50-a859-2624a47cf37e] Running
	I1115 09:33:47.999536  428896 system_pods.go:89] "kube-apiserver-ha-577290-m02" [d6fb6ef6-4266-45e7-93c3-76c5ff31c0c5] Running
	I1115 09:33:47.999544  428896 system_pods.go:89] "kube-apiserver-ha-577290-m03" [23b73095-c581-4178-be4c-26dd08f8d4dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 09:33:47.999553  428896 system_pods.go:89] "kube-controller-manager-ha-577290" [f28c8e92-79ec-45ba-87a1-f07151431d5c] Running
	I1115 09:33:47.999561  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m02" [8daa249c-7866-4ad3-bd2f-aa94ef222eb7] Running
	I1115 09:33:47.999573  428896 system_pods.go:89] "kube-controller-manager-ha-577290-m03" [53c1116a-ca9c-4f6a-a317-2159d25ae09c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 09:33:47.999586  428896 system_pods.go:89] "kube-proxy-4j6b5" [67899ff8-aa1a-41d8-b7a3-4fea91a10fa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 09:33:47.999594  428896 system_pods.go:89] "kube-proxy-6mkwq" [e2ddd593-d255-4f3d-b008-72b920167540] Running
	I1115 09:33:47.999602  428896 system_pods.go:89] "kube-proxy-k6gmr" [9f25b23c-212d-4987-9d75-335a513ad8c2] Running
	I1115 09:33:47.999608  428896 system_pods.go:89] "kube-proxy-zkk5v" [57c4c9d1-9a69-4190-a1cc-0036d422972c] Running
	I1115 09:33:47.999615  428896 system_pods.go:89] "kube-scheduler-ha-577290" [09b6d338-2eb4-469c-ae21-a8e58b9c4622] Running
	I1115 09:33:47.999623  428896 system_pods.go:89] "kube-scheduler-ha-577290-m02" [7b3d6e56-319c-492f-8197-fb4c6c883fed] Running
	I1115 09:33:47.999633  428896 system_pods.go:89] "kube-scheduler-ha-577290-m03" [6d9b1eb9-2fa8-4bd5-b0a2-fa1b45c93b7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 09:33:47.999642  428896 system_pods.go:89] "kube-vip-ha-577290" [b451c58a-b25d-4697-b9c5-7e2fc03cea67] Running
	I1115 09:33:47.999654  428896 system_pods.go:89] "kube-vip-ha-577290-m02" [057ddd08-41fa-4738-a72c-a91a4e004fb1] Running
	I1115 09:33:47.999660  428896 system_pods.go:89] "kube-vip-ha-577290-m03" [7aaee1aa-2771-45e7-b0af-5c28f8c8a227] Running
	I1115 09:33:47.999665  428896 system_pods.go:89] "storage-provisioner" [c6bdc68a-8f6a-4b01-a166-66128641846b] Running
	I1115 09:33:47.999676  428896 system_pods.go:126] duration metric: took 6.175615ms to wait for k8s-apps to be running ...
	I1115 09:33:47.999689  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:33:47.999747  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:33:48.013321  428896 system_svc.go:56] duration metric: took 13.620486ms WaitForService to wait for kubelet
	I1115 09:33:48.013354  428896 kubeadm.go:587] duration metric: took 188.047542ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:33:48.013372  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:33:48.017378  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017414  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017429  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017435  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017440  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017446  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017451  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:33:48.017456  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:33:48.017465  428896 node_conditions.go:105] duration metric: took 4.087504ms to run NodePressure ...
	I1115 09:33:48.017479  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:33:48.017513  428896 start.go:256] writing updated cluster config ...
	I1115 09:33:48.019414  428896 out.go:203] 
	I1115 09:33:48.021095  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:48.021213  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.022801  428896 out.go:179] * Starting "ha-577290-m04" worker node in "ha-577290" cluster
	I1115 09:33:48.023813  428896 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:33:48.025033  428896 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:33:48.026034  428896 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:33:48.026051  428896 cache.go:65] Caching tarball of preloaded images
	I1115 09:33:48.026126  428896 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:33:48.026161  428896 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:33:48.026176  428896 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:33:48.026313  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.048674  428896 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:33:48.048695  428896 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:33:48.048712  428896 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:33:48.048737  428896 start.go:360] acquireMachinesLock for ha-577290-m04: {Name:mk727375190f43e7b9d23177818f3e0fe7e90632 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:33:48.048792  428896 start.go:364] duration metric: took 39.722µs to acquireMachinesLock for "ha-577290-m04"
	I1115 09:33:48.048810  428896 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:33:48.048817  428896 fix.go:54] fixHost starting: m04
	I1115 09:33:48.049018  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:33:48.066458  428896 fix.go:112] recreateIfNeeded on ha-577290-m04: state=Stopped err=<nil>
	W1115 09:33:48.066487  428896 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:33:48.068426  428896 out.go:252] * Restarting existing docker container for "ha-577290-m04" ...
	I1115 09:33:48.068502  428896 cli_runner.go:164] Run: docker start ha-577290-m04
	I1115 09:33:48.374025  428896 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:33:48.394334  428896 kic.go:430] container "ha-577290-m04" state is running.
	I1115 09:33:48.394855  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:48.414950  428896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/config.json ...
	I1115 09:33:48.415224  428896 machine.go:94] provisionDockerMachine start ...
	I1115 09:33:48.415304  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:48.436207  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:48.436464  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:48.436478  428896 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:33:48.437107  428896 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46302->127.0.0.1:33199: read: connection reset by peer
	I1115 09:33:51.570007  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m04
	
	I1115 09:33:51.570038  428896 ubuntu.go:182] provisioning hostname "ha-577290-m04"
	I1115 09:33:51.570109  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:51.589648  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:51.589938  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:51.589956  428896 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-577290-m04 && echo "ha-577290-m04" | sudo tee /etc/hostname
	I1115 09:33:51.730555  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-577290-m04
	
	I1115 09:33:51.730652  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:51.749427  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:51.749732  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:51.749758  428896 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-577290-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-577290-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-577290-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:33:51.881659  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:33:51.881699  428896 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:33:51.881721  428896 ubuntu.go:190] setting up certificates
	I1115 09:33:51.881735  428896 provision.go:84] configureAuth start
	I1115 09:33:51.881795  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:51.905477  428896 provision.go:143] copyHostCerts
	I1115 09:33:51.905520  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:51.905560  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:33:51.905565  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:33:51.905636  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:33:51.905713  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:51.905742  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:33:51.905749  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:33:51.905780  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:33:51.905850  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:51.905881  428896 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:33:51.905887  428896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:33:51.905918  428896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:33:51.905994  428896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.ha-577290-m04 san=[127.0.0.1 192.168.49.5 ha-577290-m04 localhost minikube]
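configureAuth above issues a server certificate for the docker-machine provisioner whose SANs cover the node IP, hostname and localhost. An illustrative Go sketch that builds a certificate with the same SAN list (self-signed here for brevity; minikube actually signs it with its machine CA key):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Generates a self-signed server certificate carrying the SANs shown in the
// log above (sketch only; key type and validity period are assumptions).
func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-577290-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-577290-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}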
	I1115 09:33:52.709519  428896 provision.go:177] copyRemoteCerts
	I1115 09:33:52.709588  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:33:52.709639  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:52.729670  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:52.827014  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:33:52.827074  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:33:52.845307  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:33:52.845373  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:33:52.864228  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:33:52.864311  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:33:52.882736  428896 provision.go:87] duration metric: took 1.000983567s to configureAuth
	I1115 09:33:52.882768  428896 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:33:52.882985  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:52.883086  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:52.901749  428896 main.go:143] libmachine: Using SSH client type: native
	I1115 09:33:52.901964  428896 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1115 09:33:52.901980  428896 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:33:53.158344  428896 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:33:53.158378  428896 machine.go:97] duration metric: took 4.74313086s to provisionDockerMachine
	I1115 09:33:53.158427  428896 start.go:293] postStartSetup for "ha-577290-m04" (driver="docker")
	I1115 09:33:53.158462  428896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:33:53.158540  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:33:53.158593  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.180692  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.278677  428896 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:33:53.282826  428896 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:33:53.282861  428896 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:33:53.282950  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:33:53.283052  428896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:33:53.283142  428896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:33:53.283157  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:33:53.283256  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:33:53.292307  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:53.311030  428896 start.go:296] duration metric: took 152.582175ms for postStartSetup
	I1115 09:33:53.311119  428896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:33:53.311155  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.330486  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.423358  428896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:33:53.428267  428896 fix.go:56] duration metric: took 5.379444169s for fixHost
	I1115 09:33:53.428291  428896 start.go:83] releasing machines lock for "ha-577290-m04", held for 5.379488718s
	I1115 09:33:53.428356  428896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:33:53.450722  428896 out.go:179] * Found network options:
	I1115 09:33:53.452273  428896 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1115 09:33:53.453579  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453607  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453616  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453643  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453660  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	W1115 09:33:53.453674  428896 proxy.go:120] fail to check proxy env: Error ip not in block
	I1115 09:33:53.453759  428896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:33:53.453807  428896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:33:53.453873  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.453813  428896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:33:53.472760  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.473149  428896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:33:53.627249  428896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:33:53.632573  428896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:33:53.632637  428896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:33:53.642178  428896 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:33:53.642206  428896 start.go:496] detecting cgroup driver to use...
	I1115 09:33:53.642240  428896 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:33:53.642300  428896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:33:53.657825  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:33:53.671742  428896 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:33:53.671815  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:33:53.687976  428896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:33:53.701149  428896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:33:53.785060  428896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:33:53.872517  428896 docker.go:234] disabling docker service ...
	I1115 09:33:53.872587  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:33:53.888847  428896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:33:53.902669  428896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:33:53.985655  428896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:33:54.076443  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:33:54.089637  428896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:33:54.104342  428896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:33:54.104514  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.113954  428896 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:33:54.114031  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.123713  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.133355  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.144683  428896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:33:54.153702  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.163284  428896 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.172255  428896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:33:54.181589  428896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:33:54.189668  428896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:33:54.197336  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:54.288186  428896 ssh_runner.go:195] Run: sudo systemctl restart crio
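The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd, move conmon into the pod cgroup and allow unprivileged low ports, then reload systemd and restart CRI-O. A small Go sketch of the same kind of key rewrite (regex approach and example config contents are assumptions, not minikube's crio.go):

package main

import (
	"fmt"
	"regexp"
)

// setKey rewrites `key = ...` lines in a crio.conf snippet, mirroring the
// sed edits shown in the log above.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	// Illustrative starting config; real values come from the node image.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
`
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "systemd")
	fmt.Print(conf)
}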
	I1115 09:33:54.403383  428896 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:33:54.403492  428896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:33:54.407772  428896 start.go:564] Will wait 60s for crictl version
	I1115 09:33:54.407839  428896 ssh_runner.go:195] Run: which crictl
	I1115 09:33:54.411798  428896 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:33:54.438501  428896 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:33:54.438607  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:54.468561  428896 ssh_runner.go:195] Run: crio --version
	I1115 09:33:54.499645  428896 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:33:54.501099  428896 out.go:179]   - env NO_PROXY=192.168.49.2
	I1115 09:33:54.502317  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1115 09:33:54.503727  428896 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1115 09:33:54.505140  428896 cli_runner.go:164] Run: docker network inspect ha-577290 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:33:54.524109  428896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1115 09:33:54.528569  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:54.539044  428896 mustload.go:66] Loading cluster: ha-577290
	I1115 09:33:54.539261  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:54.539487  428896 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:33:54.557777  428896 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:33:54.558052  428896 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290 for IP: 192.168.49.5
	I1115 09:33:54.558069  428896 certs.go:195] generating shared ca certs ...
	I1115 09:33:54.558091  428896 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:33:54.558225  428896 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:33:54.558262  428896 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:33:54.558276  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:33:54.558292  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:33:54.558306  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:33:54.558319  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:33:54.558371  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:33:54.558419  428896 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:33:54.558431  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:33:54.558454  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:33:54.558475  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:33:54.558502  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:33:54.558543  428896 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:33:54.558573  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.558586  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.558599  428896 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.558619  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:33:54.581222  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:33:54.600809  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:33:54.619688  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:33:54.637947  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:33:54.657828  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:33:54.680584  428896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:33:54.710166  428896 ssh_runner.go:195] Run: openssl version
	I1115 09:33:54.717263  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:33:54.727158  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.731833  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.731883  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:33:54.768964  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:33:54.777707  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:33:54.787101  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.791155  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.791218  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:33:54.826198  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:33:54.835154  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:33:54.845054  428896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.849628  428896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.849691  428896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:33:54.888273  428896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:33:54.897198  428896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:33:54.901079  428896 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:33:54.901140  428896 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1115 09:33:54.901265  428896 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-577290-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-577290 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:33:54.901334  428896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:33:54.910356  428896 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:33:54.910503  428896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1115 09:33:54.919713  428896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:33:54.934154  428896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:33:54.948279  428896 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1115 09:33:54.952666  428896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:33:54.964534  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:55.052727  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:55.067727  428896 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1115 09:33:55.068040  428896 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:33:55.070111  428896 out.go:179] * Verifying Kubernetes components...
	I1115 09:33:55.071556  428896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:33:55.163626  428896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:33:55.178038  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1115 09:33:55.178107  428896 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
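
The kapi.go dump above shows the waiter's client pointed at the HA VIP (https://192.168.49.254:8443) and authenticated with the profile's client.crt, client.key and ca.crt, after which kubeadm.go overrides the stale host with the primary's address. A hedged sketch of building an equivalent client with client-go (host and file paths copied from the log; everything else is an assumption and not minikube's actual code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Host and certificate paths mirror the rest.Config dumped in the log.
        cfg := &rest.Config{
            Host: "https://192.168.49.254:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The kind of call the waiter makes later: list kube-system pods by label.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-proxy pods:", len(pods.Items))
    }
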
	I1115 09:33:55.178364  428896 node_ready.go:35] waiting up to 6m0s for node "ha-577290-m04" to be "Ready" ...
	W1115 09:33:57.182074  428896 node_ready.go:57] node "ha-577290-m04" has "Ready":"Unknown" status (will retry)
	W1115 09:33:59.682695  428896 node_ready.go:57] node "ha-577290-m04" has "Ready":"Unknown" status (will retry)
	I1115 09:34:01.682637  428896 node_ready.go:49] node "ha-577290-m04" is "Ready"
	I1115 09:34:01.682668  428896 node_ready.go:38] duration metric: took 6.504287602s for node "ha-577290-m04" to be "Ready" ...
	I1115 09:34:01.682681  428896 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:34:01.682732  428896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:34:01.696758  428896 system_svc.go:56] duration metric: took 14.066869ms WaitForService to wait for kubelet
	I1115 09:34:01.696792  428896 kubeadm.go:587] duration metric: took 6.629025488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:34:01.696815  428896 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:34:01.700561  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700588  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700599  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700603  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700606  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700609  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700612  428896 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:34:01.700615  428896 node_conditions.go:123] node cpu capacity is 8
	I1115 09:34:01.700619  428896 node_conditions.go:105] duration metric: took 3.798933ms to run NodePressure ...
	I1115 09:34:01.700630  428896 start.go:242] waiting for startup goroutines ...
	I1115 09:34:01.700652  428896 start.go:256] writing updated cluster config ...
	I1115 09:34:01.700940  428896 ssh_runner.go:195] Run: rm -f paused
	I1115 09:34:01.705190  428896 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:34:01.705690  428896 kapi.go:59] client config for ha-577290: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/ha-577290/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:34:01.714720  428896 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hcps6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.720476  428896 pod_ready.go:94] pod "coredns-66bc5c9577-hcps6" is "Ready"
	I1115 09:34:01.720506  428896 pod_ready.go:86] duration metric: took 5.756993ms for pod "coredns-66bc5c9577-hcps6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.720518  428896 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xqpdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.725758  428896 pod_ready.go:94] pod "coredns-66bc5c9577-xqpdq" is "Ready"
	I1115 09:34:01.725790  428896 pod_ready.go:86] duration metric: took 5.264346ms for pod "coredns-66bc5c9577-xqpdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.728618  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.733682  428896 pod_ready.go:94] pod "etcd-ha-577290" is "Ready"
	I1115 09:34:01.733713  428896 pod_ready.go:86] duration metric: took 5.068711ms for pod "etcd-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.733724  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.738674  428896 pod_ready.go:94] pod "etcd-ha-577290-m02" is "Ready"
	I1115 09:34:01.738702  428896 pod_ready.go:86] duration metric: took 4.96923ms for pod "etcd-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.738711  428896 pod_ready.go:83] waiting for pod "etcd-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:01.907175  428896 request.go:683] "Waited before sending request" delay="168.345879ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-577290-m03"
	I1115 09:34:02.106204  428896 request.go:683] "Waited before sending request" delay="195.32057ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:02.109590  428896 pod_ready.go:94] pod "etcd-ha-577290-m03" is "Ready"
	I1115 09:34:02.109621  428896 pod_ready.go:86] duration metric: took 370.905099ms for pod "etcd-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.307120  428896 request.go:683] "Waited before sending request" delay="197.367777ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1115 09:34:02.311497  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.506963  428896 request.go:683] "Waited before sending request" delay="195.356346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290"
	I1115 09:34:02.706771  428896 request.go:683] "Waited before sending request" delay="196.448308ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290"
	I1115 09:34:02.710109  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290" is "Ready"
	I1115 09:34:02.710139  428896 pod_ready.go:86] duration metric: took 398.612345ms for pod "kube-apiserver-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.710148  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:02.906594  428896 request.go:683] "Waited before sending request" delay="196.34557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290-m02"
	I1115 09:34:03.106336  428896 request.go:683] "Waited before sending request" delay="196.305201ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:03.109900  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290-m02" is "Ready"
	I1115 09:34:03.109935  428896 pod_ready.go:86] duration metric: took 399.77994ms for pod "kube-apiserver-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.109947  428896 pod_ready.go:83] waiting for pod "kube-apiserver-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.306248  428896 request.go:683] "Waited before sending request" delay="196.205945ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-577290-m03"
	I1115 09:34:03.507032  428896 request.go:683] "Waited before sending request" delay="197.392595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:03.509957  428896 pod_ready.go:94] pod "kube-apiserver-ha-577290-m03" is "Ready"
	I1115 09:34:03.509989  428896 pod_ready.go:86] duration metric: took 400.035581ms for pod "kube-apiserver-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.706553  428896 request.go:683] "Waited before sending request" delay="196.41245ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1115 09:34:03.710543  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:03.907045  428896 request.go:683] "Waited before sending request" delay="196.330959ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290"
	I1115 09:34:04.106816  428896 request.go:683] "Waited before sending request" delay="196.427767ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290"
	I1115 09:34:04.110328  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290" is "Ready"
	I1115 09:34:04.110357  428896 pod_ready.go:86] duration metric: took 399.786401ms for pod "kube-controller-manager-ha-577290" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.110368  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.306851  428896 request.go:683] "Waited before sending request" delay="196.351238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290-m02"
	I1115 09:34:04.506506  428896 request.go:683] "Waited before sending request" delay="196.393036ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:04.509995  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290-m02" is "Ready"
	I1115 09:34:04.510025  428896 pod_ready.go:86] duration metric: took 399.650133ms for pod "kube-controller-manager-ha-577290-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.510034  428896 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:04.706646  428896 request.go:683] "Waited before sending request" delay="196.418062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-577290-m03"
	I1115 09:34:04.906837  428896 request.go:683] "Waited before sending request" delay="196.369246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m03"
	I1115 09:34:04.909799  428896 pod_ready.go:94] pod "kube-controller-manager-ha-577290-m03" is "Ready"
	I1115 09:34:04.909834  428896 pod_ready.go:86] duration metric: took 399.79293ms for pod "kube-controller-manager-ha-577290-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:05.106269  428896 request.go:683] "Waited before sending request" delay="196.284181ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1115 09:34:05.110078  428896 pod_ready.go:83] waiting for pod "kube-proxy-4j6b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:34:05.306484  428896 request.go:683] "Waited before sending request" delay="196.226116ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4j6b5"
	I1115 09:34:05.506233  428896 request.go:683] "Waited before sending request" delay="196.286404ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:05.706640  428896 request.go:683] "Waited before sending request" delay="96.270262ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4j6b5"
	I1115 09:34:05.906700  428896 request.go:683] "Waited before sending request" delay="196.368708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:06.306548  428896 request.go:683] "Waited before sending request" delay="192.368837ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	I1115 09:34:06.707117  428896 request.go:683] "Waited before sending request" delay="93.270622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-577290-m02"
	W1115 09:34:07.116563  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:09.617314  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:12.116956  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:14.616273  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:17.116371  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:19.116501  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:21.116689  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:23.116818  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:25.617234  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:28.117036  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:30.617226  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:33.116469  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:35.616777  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:37.617262  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:40.117449  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:42.117831  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:44.616287  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:46.618306  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:49.116723  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:51.616229  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:53.617820  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:56.116943  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:34:58.616333  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:00.616873  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:02.617011  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:05.117447  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:07.616106  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:09.616804  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:12.124337  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:14.616125  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:16.617016  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:19.118269  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:21.616189  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:23.617124  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:26.116836  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:28.117058  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:30.117374  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:32.618970  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:35.116227  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:37.117008  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:39.616965  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:42.116851  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:44.618213  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:47.116222  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:49.616933  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:52.116850  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:54.616756  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:57.116793  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:35:59.616644  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:02.116080  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:04.116718  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:06.618437  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:09.116036  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:11.116546  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:13.616999  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:16.117083  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:18.616365  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:20.616664  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:22.617250  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:25.116824  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:27.116961  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:29.616385  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:32.116865  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:34.616343  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:36.616981  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:39.117055  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:41.616357  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:43.616462  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:45.616976  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:48.117111  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:50.616999  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:53.115913  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:55.116281  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:57.616365  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:36:59.616778  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:02.116803  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:04.615843  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:06.616292  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:08.617646  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:11.116723  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:13.116830  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:15.616517  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:18.116690  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:20.616314  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:23.116309  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:25.116508  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:27.117035  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:29.617437  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:32.116146  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:34.116964  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:36.616844  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:39.115867  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:41.116493  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:43.616383  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:45.617047  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:48.116809  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:50.617022  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:53.116939  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:55.615892  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:37:57.616280  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	W1115 09:38:00.116339  428896 pod_ready.go:104] pod "kube-proxy-4j6b5" is not "Ready", error: <nil>
	I1115 09:38:01.705542  428896 pod_ready.go:86] duration metric: took 3m56.595425039s for pod "kube-proxy-4j6b5" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 09:38:01.705579  428896 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1115 09:38:01.705595  428896 pod_ready.go:40] duration metric: took 4m0.000371267s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:38:01.707088  428896 out.go:203] 
	W1115 09:38:01.708237  428896 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1115 09:38:01.709353  428896 out.go:203] 
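
The failure above is the 4m0s extra wait in pod_ready.go expiring while kube-proxy-4j6b5 (scheduled on ha-577290-m02, per the node description below) never reported Ready, which surfaces as GUEST_START: WaitExtra: context deadline exceeded. A rough sketch of that kind of readiness poll, assuming a clientset built as in the earlier config sketch (function and package names are assumptions, not the harness's real code):

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // PodReady reports whether the pod's Ready condition is True, the same
    // check the log above keeps retrying for kube-proxy-4j6b5.
    func PodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // WaitPodReady polls until the pod is Ready or the context deadline expires,
    // which is how a fixed budget turns into "context deadline exceeded".
    func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil && PodReady(pod) {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %q not Ready: %w", name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }
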
	
	
	==> CRI-O <==
	Nov 15 09:31:58 ha-577290 crio[579]: time="2025-11-15T09:31:58.166448503Z" level=info msg="Starting container: cee33caab4e63c53b0f16030d6b7e5ed117b6d8deb336214e6325e4c21565d5d" id=c1b92e8f-c9f4-4c82-a41e-6504366337f3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:31:58 ha-577290 crio[579]: time="2025-11-15T09:31:58.170004607Z" level=info msg="Started container" PID=1093 containerID=cee33caab4e63c53b0f16030d6b7e5ed117b6d8deb336214e6325e4c21565d5d description=kube-system/kube-proxy-zkk5v/kube-proxy id=c1b92e8f-c9f4-4c82-a41e-6504366337f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14039ab33aafde8013836c1fd46872278f5297798f6c07d283d68a97ea4583f7
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.627923874Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.632195153Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.632221855Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.632240686Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.636286833Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.636323988Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.636345278Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.640494776Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.640533607Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.640558413Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.644494386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:32:08 ha-577290 crio[579]: time="2025-11-15T09:32:08.644530081Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.888148988Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=98eacdea-1a37-499f-8909-be6da1da2735 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.889164748Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0ea4584e-7de3-4971-b7a9-982693ba6272 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.890416359Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=41502b7a-3213-48de-9ecf-63187b27ee99 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.890583199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.896320599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.896586136Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/32262e33a9ace53eeef0ce8cec406ff2f8080ce5fcc81622a4d5a449e4254a8a/merged/etc/passwd: no such file or directory"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.89662929Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/32262e33a9ace53eeef0ce8cec406ff2f8080ce5fcc81622a4d5a449e4254a8a/merged/etc/group: no such file or directory"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.896965362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.924907138Z" level=info msg="Created container 15f99c8b7c3ec74fa6cd3825acae110d7aaa10d4ae4bc392f84d2694551fea64: kube-system/storage-provisioner/storage-provisioner" id=41502b7a-3213-48de-9ecf-63187b27ee99 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.925582994Z" level=info msg="Starting container: 15f99c8b7c3ec74fa6cd3825acae110d7aaa10d4ae4bc392f84d2694551fea64" id=f7d7c439-9d04-49d3-8fb2-a5cb5724fe0d name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:32:28 ha-577290 crio[579]: time="2025-11-15T09:32:28.927550703Z" level=info msg="Started container" PID=1384 containerID=15f99c8b7c3ec74fa6cd3825acae110d7aaa10d4ae4bc392f84d2694551fea64 description=kube-system/storage-provisioner/storage-provisioner id=f7d7c439-9d04-49d3-8fb2-a5cb5724fe0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e2c23b2acccac36730090ee320863048bfa4890601874d8655723187b870ec5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	15f99c8b7c3ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner       1                   1e2c23b2accca       storage-provisioner                 kube-system
	af67b5a139cd4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   dee6193939725       coredns-66bc5c9577-hcps6            kube-system
	6327e6dd1bf4f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   0                   0425b8d4bc632       coredns-66bc5c9577-xqpdq            kube-system
	aea52b96c3cdc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   41aaed08227fa       busybox-7b57f96db7-wzz75            default
	cee33caab4e63       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 minutes ago       Running             kube-proxy                0                   14039ab33aafd       kube-proxy-zkk5v                    kube-system
	db97b636d1fb3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       0                   1e2c23b2accca       storage-provisioner                 kube-system
	35abc581515dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               0                   8743618d26aee       kindnet-dsj4t                       kube-system
	f33da4a57e7ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 minutes ago       Running             etcd                      0                   dc03af57a1b95       etcd-ha-577290                      kube-system
	6a62ffd50e27a       ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38   6 minutes ago       Running             kube-vip                  0                   e33baac547bf7       kube-vip-ha-577290                  kube-system
	98b9fc9a33f0b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 minutes ago       Running             kube-apiserver            0                   8ef9eeee65fdd       kube-apiserver-ha-577290            kube-system
	bf31a86759567       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 minutes ago       Running             kube-scheduler            0                   dabbff5016f34       kube-scheduler-ha-577290            kube-system
	aa99d93bfb488       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 minutes ago       Running             kube-controller-manager   0                   e6c1abddb49a1       kube-controller-manager-ha-577290   kube-system
	
	
	==> coredns [6327e6dd1bf4f46a1bf0de49d7f69cdd31bbfbeebe3c41e363eb0c978600cefc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53562 - 60738 "HINFO IN 3413401309951715269.3888521406455700014. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023013792s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [af67b5a139cd4598535eb46e6ae6be357b66b795698048e10bf4fbc158e6b4bc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51957 - 16081 "HINFO IN 3362391939732844574.4975598062171033207. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05710445s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
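
Both coredns instances report the same symptom: the client-go reflectors inside the pods time out dialing the in-cluster kubernetes Service VIP at 10.96.0.1:443, so the plugin keeps serving with an unsynced API view. A trivial connectivity probe of the kind that would confirm this from inside the pod network (address taken from the errors above; the timeout and program itself are illustrative assumptions):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.1:443 is the Service VIP the coredns reflectors fail to reach.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("service VIP unreachable:", err) // matches the i/o timeouts above
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }
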
	
	
	==> describe nodes <==
	Name:               ha-577290
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-577290
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=ha-577290
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_27_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:26:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-577290
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:26:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:26:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:26:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:37:34 +0000   Sat, 15 Nov 2025 09:27:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-577290
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                17b25390-0a5d-4f6f-a9da-379a9ddec8f9
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wzz75             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 coredns-66bc5c9577-hcps6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-xqpdq             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-577290                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-dsj4t                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-577290             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-577290    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zkk5v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-577290             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-577290                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 6m17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-577290 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-577290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-577290 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-577290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-577290 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-577290 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                    node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-577290 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           7m30s                  node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  Starting                 6m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m25s (x8 over 6m25s)  kubelet          Node ha-577290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s (x8 over 6m25s)  kubelet          Node ha-577290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s (x8 over 6m25s)  kubelet          Node ha-577290 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-577290 event: Registered Node ha-577290 in Controller
	
	
	Name:               ha-577290-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-577290-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=ha-577290
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T09_27_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:27:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-577290-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:38:15 +0000   Sat, 15 Nov 2025 09:27:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:38:15 +0000   Sat, 15 Nov 2025 09:27:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:38:15 +0000   Sat, 15 Nov 2025 09:27:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:38:15 +0000   Sat, 15 Nov 2025 09:33:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-577290-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                cf61e038-9210-463f-800d-6938cf508c1f
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-n4kml                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 etcd-ha-577290-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-k8kmn                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-577290-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-577290-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4j6b5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-577290-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-577290-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   RegisteredNode           10m                    node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   NodeHasSufficientMemory  7m35s (x8 over 7m35s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m35s (x8 over 7m35s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m35s (x8 over 7m35s)  kubelet          Node ha-577290-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m35s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m30s                  node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-577290-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m23s (x8 over 6m23s)  kubelet          Node ha-577290-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m16s                  node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Normal   RegisteredNode           6m16s                  node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	  Warning  ContainerGCFailed        5m23s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m29s                  node-controller  Node ha-577290-m02 event: Registered Node ha-577290-m02 in Controller
	
	
	Name:               ha-577290-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-577290-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=ha-577290
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_15T09_29_23_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:29:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-577290-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:38:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:37:56 +0000   Sat, 15 Nov 2025 09:34:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-577290-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                217e31dc-4cef-4738-9773-fc168032cffb
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2k69t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kindnet-7xtwk               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m53s
	  kube-system                 kube-proxy-6mkwq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 8m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m53s (x3 over 8m53s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m53s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  NodeHasSufficientPID     8m53s (x3 over 8m53s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m53s (x3 over 8m53s)  kubelet          Node ha-577290-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           8m49s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  RegisteredNode           8m49s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node ha-577290-m04 status is now: NodeReady
	  Normal  RegisteredNode           7m30s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  NodeNotReady             5m26s                  node-controller  Node ha-577290-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-577290-m04 event: Registered Node ha-577290-m04 in Controller
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m28s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m28s)  kubelet          Node ha-577290-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x8 over 4m28s)  kubelet          Node ha-577290-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [f33da4a57e7abac3ebb4c2bb796754d89a55d77cae917a4638e1dc7bb54b55b9] <==
	{"level":"info","ts":"2025-11-15T09:33:41.995057Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"9a2bf1be0b18fe46","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-15T09:33:41.995115Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:33:42.005335Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:33:42.005461Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:33:42.397124Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9a2bf1be0b18fe46","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T09:33:42.397161Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9a2bf1be0b18fe46","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-15T09:38:07.666119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:60092","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:38:07.702603Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6181255524538242371 12593026477526642892)"}
	{"level":"info","ts":"2025-11-15T09:38:07.703592Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"9a2bf1be0b18fe46","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-15T09:38:07.703628Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:38:07.703775Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:38:07.703801Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:38:07.703944Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:38:07.704010Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:38:07.704072Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:38:07.704229Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","error":"context canceled"}
	{"level":"warn","ts":"2025-11-15T09:38:07.704309Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9a2bf1be0b18fe46","error":"failed to read 9a2bf1be0b18fe46 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-11-15T09:38:07.704334Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:38:07.704475Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46","error":"context canceled"}
	{"level":"info","ts":"2025-11-15T09:38:07.704498Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:38:07.704508Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:38:07.704522Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"9a2bf1be0b18fe46"}
	{"level":"info","ts":"2025-11-15T09:38:07.704545Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:38:07.710323Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"9a2bf1be0b18fe46"}
	{"level":"warn","ts":"2025-11-15T09:38:07.712875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:52838","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:38:16 up  1:20,  0 user,  load average: 0.49, 0.90, 1.11
	Linux ha-577290 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35abc581515dce0fd200cca6331404c3173165c3dfb1cc5aeb6f1044b505b43a] <==
	I1115 09:37:38.628070       1 main.go:324] Node ha-577290-m04 has CIDR [10.244.3.0/24] 
	I1115 09:37:48.628812       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:48.628852       1 main.go:301] handling current node
	I1115 09:37:48.628872       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 09:37:48.628879       1 main.go:324] Node ha-577290-m02 has CIDR [10.244.1.0/24] 
	I1115 09:37:48.629073       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 09:37:48.629085       1 main.go:324] Node ha-577290-m03 has CIDR [10.244.2.0/24] 
	I1115 09:37:48.629195       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 09:37:48.629204       1 main.go:324] Node ha-577290-m04 has CIDR [10.244.3.0/24] 
	I1115 09:37:58.627110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:37:58.627140       1 main.go:301] handling current node
	I1115 09:37:58.627156       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 09:37:58.627161       1 main.go:324] Node ha-577290-m02 has CIDR [10.244.1.0/24] 
	I1115 09:37:58.627354       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 09:37:58.627366       1 main.go:324] Node ha-577290-m03 has CIDR [10.244.2.0/24] 
	I1115 09:37:58.627527       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 09:37:58.627542       1 main.go:324] Node ha-577290-m04 has CIDR [10.244.3.0/24] 
	I1115 09:38:08.626926       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1115 09:38:08.626980       1 main.go:324] Node ha-577290-m02 has CIDR [10.244.1.0/24] 
	I1115 09:38:08.627218       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1115 09:38:08.627248       1 main.go:324] Node ha-577290-m03 has CIDR [10.244.2.0/24] 
	I1115 09:38:08.627343       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1115 09:38:08.627350       1 main.go:324] Node ha-577290-m04 has CIDR [10.244.3.0/24] 
	I1115 09:38:08.627452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:38:08.627460       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98b9fc9a33f0b40586e635c881668594f59cdd960b26204a457a95a2020bd154] <==
	I1115 09:31:57.310740       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 09:31:57.310755       1 cache.go:39] Caches are synced for autoregister controller
	I1115 09:31:57.310792       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 09:31:57.311036       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 09:31:57.311382       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 09:31:57.311580       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 09:31:57.311710       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 09:31:57.311719       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 09:31:57.311945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 09:31:57.318282       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 09:31:57.319906       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 09:31:57.327751       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 09:31:57.327785       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 09:31:57.327815       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:31:57.336835       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 09:31:57.345583       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 09:31:57.345612       1 policy_source.go:240] refreshing policies
	I1115 09:31:57.367345       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:31:57.862499       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:31:58.217616       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:32:00.980325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:32:01.031732       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:32:01.073303       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:32:29.834834       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:32:29.848629       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [aa99d93bfb4888fbc03108f08590c503f95f20e1969eabb19d4a76ea1be94d6f] <==
	I1115 09:32:00.676716       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 09:32:00.676762       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:32:00.678945       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 09:32:00.679002       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:32:00.679070       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290-m04"
	I1115 09:32:00.679107       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290-m02"
	I1115 09:32:00.679115       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290"
	I1115 09:32:00.679195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-577290-m03"
	I1115 09:32:00.679235       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 09:32:00.681589       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:32:00.684863       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 09:32:00.690446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:32:00.708870       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:32:00.709008       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-577290-m04"
	I1115 09:32:00.716566       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:32:00.719799       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:32:00.720867       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 09:32:29.843480       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-985s2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-985s2\": the object has been modified; please apply your changes to the latest version and try again"
	I1115 09:32:29.843563       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c1d8f070-a8e6-4a4e-bd8c-daa4e92e5c06", APIVersion:"v1", ResourceVersion:"312", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-985s2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-985s2": the object has been modified; please apply your changes to the latest version and try again
	I1115 09:32:38.808314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-577290-m04"
	I1115 09:32:50.688455       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1115 09:33:40.817972       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 09:34:01.605376       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-577290-m04"
	I1115 09:38:09.886660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-577290-m04"
	E1115 09:38:09.914531       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-577290-m03\", UID:\"8f188603-ae89-406d-8588-c4cf50c1417b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noC
opy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-577290-m03\", UID:\"cbb06dcb-1086-452f-8c4e-5e4ee1e6bd2c\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-577290-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [cee33caab4e63c53b0f16030d6b7e5ed117b6d8deb336214e6325e4c21565d5d] <==
	I1115 09:31:58.220911       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:31:58.302196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:31:58.403132       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:31:58.403172       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:31:58.403265       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:31:58.427987       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:31:58.428075       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:31:58.437466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:31:58.438106       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:31:58.438156       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:31:58.440186       1 config.go:309] "Starting node config controller"
	I1115 09:31:58.440198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:31:58.440205       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:31:58.440496       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:31:58.440506       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:31:58.440537       1 config.go:200] "Starting service config controller"
	I1115 09:31:58.440546       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:31:58.440559       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:31:58.440583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:31:58.541578       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:31:58.541927       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:31:58.542004       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bf31a867595678c370bce5d49663eec7f39f09c0ffba1367b034ab02c073ea71] <==
	I1115 09:31:52.778130       1 serving.go:386] Generated self-signed cert in-memory
	I1115 09:31:57.295411       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 09:31:57.295437       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:31:57.300344       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:31:57.300357       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 09:31:57.300378       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:31:57.300385       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:31:57.300429       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:31:57.300385       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 09:31:57.300718       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 09:31:57.300752       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 09:31:57.401581       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 09:31:57.401710       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 09:31:57.401738       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1115 09:38:04.501275       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-2k69t\": pod busybox-7b57f96db7-2k69t is already assigned to node \"ha-577290-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-2k69t" node="ha-577290-m04"
	E1115 09:38:04.501378       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 51be4f93-e340-4d0a-990c-4e84b04df5fe(default/busybox-7b57f96db7-2k69t) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-2k69t"
	E1115 09:38:04.501940       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-2k69t\": pod busybox-7b57f96db7-2k69t is already assigned to node \"ha-577290-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-2k69t"
	I1115 09:38:04.503080       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-2k69t" node="ha-577290-m04"
	
	
	==> kubelet <==
	Nov 15 09:31:57 ha-577290 kubelet[750]: E1115 09:31:57.406670     750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-577290\" already exists" pod="kube-system/kube-vip-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.406703     750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: E1115 09:31:57.413764     750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-577290\" already exists" pod="kube-system/etcd-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.413799     750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: E1115 09:31:57.421121     750 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-577290\" already exists" pod="kube-system/kube-apiserver-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.438492     750 kubelet_node_status.go:124] "Node was previously registered" node="ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.438580     750 kubelet_node_status.go:78] "Successfully registered node" node="ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.438616     750 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.439452     750 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.750649     750 apiserver.go:52] "Watching apiserver"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.754485     750 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-577290" podUID="8b3a5624-ba15-4654-b2b4-c63e078af3c6"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.766414     750 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.766442     750 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-577290"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.774841     750 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f8477f86b3c1a1379dba41d926e4d5" path="/var/lib/kubelet/pods/93f8477f86b3c1a1379dba41d926e4d5/volumes"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.799320     750 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-577290" podUID="8b3a5624-ba15-4654-b2b4-c63e078af3c6"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.837154     750 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-577290" podStartSLOduration=0.837134922 podStartE2EDuration="837.134922ms" podCreationTimestamp="2025-11-15 09:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:31:57.836889047 +0000 UTC m=+6.151661690" watchObservedRunningTime="2025-11-15 09:31:57.837134922 +0000 UTC m=+6.151907564"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.851377     750 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.858916     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73dc267e-1872-43d0-97a0-6dfffe4327ab-lib-modules\") pod \"kindnet-dsj4t\" (UID: \"73dc267e-1872-43d0-97a0-6dfffe4327ab\") " pod="kube-system/kindnet-dsj4t"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859052     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57c4c9d1-9a69-4190-a1cc-0036d422972c-lib-modules\") pod \"kube-proxy-zkk5v\" (UID: \"57c4c9d1-9a69-4190-a1cc-0036d422972c\") " pod="kube-system/kube-proxy-zkk5v"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859100     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73dc267e-1872-43d0-97a0-6dfffe4327ab-xtables-lock\") pod \"kindnet-dsj4t\" (UID: \"73dc267e-1872-43d0-97a0-6dfffe4327ab\") " pod="kube-system/kindnet-dsj4t"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859126     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/73dc267e-1872-43d0-97a0-6dfffe4327ab-cni-cfg\") pod \"kindnet-dsj4t\" (UID: \"73dc267e-1872-43d0-97a0-6dfffe4327ab\") " pod="kube-system/kindnet-dsj4t"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859180     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c6bdc68a-8f6a-4b01-a166-66128641846b-tmp\") pod \"storage-provisioner\" (UID: \"c6bdc68a-8f6a-4b01-a166-66128641846b\") " pod="kube-system/storage-provisioner"
	Nov 15 09:31:57 ha-577290 kubelet[750]: I1115 09:31:57.859201     750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57c4c9d1-9a69-4190-a1cc-0036d422972c-xtables-lock\") pod \"kube-proxy-zkk5v\" (UID: \"57c4c9d1-9a69-4190-a1cc-0036d422972c\") " pod="kube-system/kube-proxy-zkk5v"
	Nov 15 09:32:06 ha-577290 kubelet[750]: I1115 09:32:06.508847     750 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 09:32:28 ha-577290 kubelet[750]: I1115 09:32:28.887731     750 scope.go:117] "RemoveContainer" containerID="db97b636d1fb37a94b9cc153f99d6526bb0228407a65710988f5f94aa08f1910"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-577290 -n ha-577290
helpers_test.go:269: (dbg) Run:  kubectl --context ha-577290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.66s)
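For anyone re-running this post-mortem by hand, the same status checks can be issued directly against the profile; a minimal sketch built from the helper commands quoted above (the extra `get nodes` call is not part of the test, just an illustrative way to see which members the degradation check was counting):

    out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-577290 -n ha-577290
    kubectl --context ha-577290 get nodes -o wide
    kubectl --context ha-577290 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'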

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.98s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-921712 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-921712 --output=json --user=testUser: exit status 80 (1.977340679s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fa5b954f-66a4-4b73-a6f8-6a51b99fe3b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-921712 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"fffddf74-a85d-408c-a100-4b245271005b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T09:41:44Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"30077c8a-fdcb-4919-9c5b-4645201ef2f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-921712 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.98s)
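The GUEST_PAUSE error above is minikube shelling into the node and running `sudo runc list -f json`, which fails because /run/runc does not exist on this CRI-O node. A rough manual reproduction, assuming the json-output-921712 profile is still up (profile name and the /run/runc path are taken from the log; the extra `ls` and `crictl` calls are only illustrative checks for where the runtime actually keeps its container state):

    out/minikube-linux-amd64 -p json-output-921712 ssh -- sudo runc list -f json
    out/minikube-linux-amd64 -p json-output-921712 ssh -- sudo ls -d /run/runc /run/crun
    out/minikube-linux-amd64 -p json-output-921712 ssh -- sudo crictl ps --state running --quiet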

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.82s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-921712 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-921712 --output=json --user=testUser: exit status 80 (1.819035241s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd00836f-b3ed-493f-91cd-c9a2e5af5960","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-921712 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"8dab32e0-9c53-45be-9da6-70406dfc0d80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-15T09:41:46Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"4bf27348-9499-43c1-9aad-880da0dc45e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-921712 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.82s)

                                                
                                    
x
+
TestPause/serial/Pause (6.25s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-717282 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-717282 --alsologtostderr -v=5: exit status 80 (2.527229639s)

                                                
                                                
-- stdout --
	* Pausing node pause-717282 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:57:28.182204  569179 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:57:28.182498  569179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:28.182509  569179 out.go:374] Setting ErrFile to fd 2...
	I1115 09:57:28.182513  569179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:28.182714  569179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:57:28.182929  569179 out.go:368] Setting JSON to false
	I1115 09:57:28.182982  569179 mustload.go:66] Loading cluster: pause-717282
	I1115 09:57:28.183329  569179 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:28.183748  569179 cli_runner.go:164] Run: docker container inspect pause-717282 --format={{.State.Status}}
	I1115 09:57:28.202073  569179 host.go:66] Checking if "pause-717282" exists ...
	I1115 09:57:28.202333  569179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:57:28.262919  569179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 09:57:28.250756449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:57:28.263608  569179 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-717282 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 09:57:28.265896  569179 out.go:179] * Pausing node pause-717282 ... 
	I1115 09:57:28.270832  569179 host.go:66] Checking if "pause-717282" exists ...
	I1115 09:57:28.271094  569179 ssh_runner.go:195] Run: systemctl --version
	I1115 09:57:28.271133  569179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:28.295036  569179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:28.388110  569179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:28.404709  569179 pause.go:52] kubelet running: true
	I1115 09:57:28.404792  569179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 09:57:28.549454  569179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 09:57:28.549570  569179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 09:57:28.620553  569179 cri.go:89] found id: "417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b"
	I1115 09:57:28.620581  569179 cri.go:89] found id: "4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895"
	I1115 09:57:28.620587  569179 cri.go:89] found id: "44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4"
	I1115 09:57:28.620592  569179 cri.go:89] found id: "d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813"
	I1115 09:57:28.620596  569179 cri.go:89] found id: "c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd"
	I1115 09:57:28.620600  569179 cri.go:89] found id: "97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132"
	I1115 09:57:28.620604  569179 cri.go:89] found id: "eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407"
	I1115 09:57:28.620608  569179 cri.go:89] found id: ""
	I1115 09:57:28.620654  569179 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:57:28.632629  569179 retry.go:31] will retry after 361.870925ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:57:28Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:57:28.995248  569179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:29.009251  569179 pause.go:52] kubelet running: false
	I1115 09:57:29.009302  569179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 09:57:29.127617  569179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 09:57:29.127721  569179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 09:57:29.206279  569179 cri.go:89] found id: "417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b"
	I1115 09:57:29.206308  569179 cri.go:89] found id: "4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895"
	I1115 09:57:29.206315  569179 cri.go:89] found id: "44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4"
	I1115 09:57:29.206318  569179 cri.go:89] found id: "d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813"
	I1115 09:57:29.206322  569179 cri.go:89] found id: "c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd"
	I1115 09:57:29.206324  569179 cri.go:89] found id: "97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132"
	I1115 09:57:29.206327  569179 cri.go:89] found id: "eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407"
	I1115 09:57:29.206329  569179 cri.go:89] found id: ""
	I1115 09:57:29.206368  569179 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:57:29.219571  569179 retry.go:31] will retry after 254.646639ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:57:29Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:57:29.475162  569179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:29.489916  569179 pause.go:52] kubelet running: false
	I1115 09:57:29.489988  569179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 09:57:29.641970  569179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 09:57:29.642052  569179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 09:57:29.730442  569179 cri.go:89] found id: "417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b"
	I1115 09:57:29.730471  569179 cri.go:89] found id: "4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895"
	I1115 09:57:29.730477  569179 cri.go:89] found id: "44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4"
	I1115 09:57:29.730481  569179 cri.go:89] found id: "d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813"
	I1115 09:57:29.730485  569179 cri.go:89] found id: "c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd"
	I1115 09:57:29.730488  569179 cri.go:89] found id: "97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132"
	I1115 09:57:29.730492  569179 cri.go:89] found id: "eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407"
	I1115 09:57:29.730496  569179 cri.go:89] found id: ""
	I1115 09:57:29.730571  569179 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:57:29.745926  569179 retry.go:31] will retry after 619.015307ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:57:29Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:57:30.365804  569179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:30.379581  569179 pause.go:52] kubelet running: false
	I1115 09:57:30.379655  569179 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 09:57:30.531713  569179 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 09:57:30.531806  569179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 09:57:30.616743  569179 cri.go:89] found id: "417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b"
	I1115 09:57:30.616777  569179 cri.go:89] found id: "4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895"
	I1115 09:57:30.616783  569179 cri.go:89] found id: "44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4"
	I1115 09:57:30.616787  569179 cri.go:89] found id: "d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813"
	I1115 09:57:30.616791  569179 cri.go:89] found id: "c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd"
	I1115 09:57:30.616795  569179 cri.go:89] found id: "97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132"
	I1115 09:57:30.616799  569179 cri.go:89] found id: "eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407"
	I1115 09:57:30.616810  569179 cri.go:89] found id: ""
	I1115 09:57:30.616861  569179 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 09:57:30.637793  569179 out.go:203] 
	W1115 09:57:30.639323  569179 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:57:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:57:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 09:57:30.639356  569179 out.go:285] * 
	* 
	W1115 09:57:30.645584  569179 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 09:57:30.646995  569179 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-717282 --alsologtostderr -v=5" : exit status 80
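Note: the retry.go entries above show the pause path backing off (~255ms, then ~619ms) between attempts at "sudo runc list -f json" before giving up with GUEST_PAUSE. The following is a minimal Go sketch of that retry-with-backoff pattern, for illustration only; the delay schedule and helper name are assumptions, not minikube's actual retry.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc runs the same command the log above keeps retrying.
func listRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	// Assumed backoff schedule; the real delays in the log are jittered.
	delays := []time.Duration{250 * time.Millisecond, 600 * time.Millisecond}

	for attempt := 0; ; attempt++ {
		out, err := listRunc()
		if err == nil {
			fmt.Printf("runc containers: %s\n", out)
			return
		}
		if attempt >= len(delays) {
			// Out of retries: surface the failure, as pause does before exiting.
			fmt.Printf("list running: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("will retry after %v: %v\n", delays[attempt], err)
		time.Sleep(delays[attempt])
	}
}

In the failing run above, every attempt hits the same "open /run/runc: no such file or directory" error, so the pause command ultimately exits with status 80.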
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-717282
helpers_test.go:243: (dbg) docker inspect pause-717282:

-- stdout --
	[
	    {
	        "Id": "8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260",
	        "Created": "2025-11-15T09:56:42.438255591Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 554936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:56:42.476173272Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/hosts",
	        "LogPath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260-json.log",
	        "Name": "/pause-717282",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-717282:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-717282",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260",
	                "LowerDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-717282",
	                "Source": "/var/lib/docker/volumes/pause-717282/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-717282",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-717282",
	                "name.minikube.sigs.k8s.io": "pause-717282",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dc789cce3a7fc45b8c365d38a9cb46767e647890bd41d455b11f3cf65719b21d",
	            "SandboxKey": "/var/run/docker/netns/dc789cce3a7f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-717282": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f85c46d43f71b2e95461473b3512768391d7a93502e36a761dcf0c0bb0049256",
	                    "EndpointID": "09ca4517700319869157584ba26b7c6dd54d3e8c51b40b809bc22f9535997386",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "f2:3f:c9:59:34:20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-717282",
	                        "8b787b335cc4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-717282 -n pause-717282
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-717282 -n pause-717282: exit status 2 (322.526489ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-717282 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-717282 logs -n 25: (1.022341227s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-034018 sudo cat /etc/kubernetes/kubelet.conf                                                                │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /var/lib/kubelet/config.yaml                                                                │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status docker --all --full --no-pager                                                 │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat docker --no-pager                                                                 │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /etc/docker/daemon.json                                                                     │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo docker system info                                                                              │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status cri-docker --all --full --no-pager                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat cri-docker --no-pager                                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                        │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /usr/lib/systemd/system/cri-docker.service                                                  │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cri-dockerd --version                                                                           │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status containerd --all --full --no-pager                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat containerd --no-pager                                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /lib/systemd/system/containerd.service                                                      │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /etc/containerd/config.toml                                                                 │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo containerd config dump                                                                          │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status crio --all --full --no-pager                                                   │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat crio --no-pager                                                                   │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                         │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo crio config                                                                                     │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ delete  │ -p cilium-034018                                                                                                      │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p force-systemd-env-450177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-450177 │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ start   │ -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-941483      │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ start   │ -p pause-717282 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-717282             │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ pause   │ -p pause-717282 --alsologtostderr -v=5                                                                                │ pause-717282             │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:57:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:57:22.254221  567634 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:57:22.254549  567634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:22.254559  567634 out.go:374] Setting ErrFile to fd 2...
	I1115 09:57:22.254564  567634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:22.254806  567634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:57:22.255301  567634 out.go:368] Setting JSON to false
	I1115 09:57:22.256638  567634 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5983,"bootTime":1763194659,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:57:22.256752  567634 start.go:143] virtualization: kvm guest
	I1115 09:57:22.258914  567634 out.go:179] * [pause-717282] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:57:22.260367  567634 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:57:22.260411  567634 notify.go:221] Checking for updates...
	I1115 09:57:22.262954  567634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:57:22.264305  567634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:57:22.265585  567634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:57:22.266928  567634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:57:22.268297  567634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:57:22.270210  567634 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:22.270977  567634 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:57:22.300696  567634 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:57:22.300822  567634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:57:22.375423  567634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 09:57:22.362651257 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:57:22.375624  567634 docker.go:319] overlay module found
	I1115 09:57:22.377676  567634 out.go:179] * Using the docker driver based on existing profile
	I1115 09:57:20.700539  566770 out.go:252] * Updating the running docker "NoKubernetes-941483" container ...
	I1115 09:57:20.700580  566770 machine.go:94] provisionDockerMachine start ...
	I1115 09:57:20.700678  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:20.721166  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:20.721546  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:20.721577  566770 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:57:20.855584  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-941483
	
	I1115 09:57:20.855638  566770 ubuntu.go:182] provisioning hostname "NoKubernetes-941483"
	I1115 09:57:20.855802  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:20.876570  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:20.876814  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:20.876831  566770 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-941483 && echo "NoKubernetes-941483" | sudo tee /etc/hostname
	I1115 09:57:21.018304  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-941483
	
	I1115 09:57:21.018383  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.038235  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:21.038544  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:21.038568  566770 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-941483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-941483/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-941483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:57:21.170753  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:57:21.170789  566770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:57:21.170814  566770 ubuntu.go:190] setting up certificates
	I1115 09:57:21.170827  566770 provision.go:84] configureAuth start
	I1115 09:57:21.170905  566770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-941483
	I1115 09:57:21.191482  566770 provision.go:143] copyHostCerts
	I1115 09:57:21.191523  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:57:21.191565  566770 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:57:21.191578  566770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:57:21.191652  566770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:57:21.191774  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:57:21.191803  566770 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:57:21.191813  566770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:57:21.191848  566770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:57:21.191920  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:57:21.191956  566770 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:57:21.191965  566770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:57:21.191994  566770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:57:21.192058  566770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-941483 san=[127.0.0.1 192.168.85.2 NoKubernetes-941483 localhost minikube]
	I1115 09:57:21.369965  566770 provision.go:177] copyRemoteCerts
	I1115 09:57:21.370028  566770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:57:21.370063  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.391538  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:21.489522  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:57:21.489594  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:57:21.510699  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:57:21.510792  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 09:57:21.529577  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:57:21.529646  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:57:21.550816  566770 provision.go:87] duration metric: took 379.969255ms to configureAuth
	I1115 09:57:21.550853  566770 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:57:21.551082  566770 config.go:182] Loaded profile config "NoKubernetes-941483": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1115 09:57:21.551225  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.572199  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:21.572490  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:21.572518  566770 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:57:21.850806  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:57:21.850842  566770 machine.go:97] duration metric: took 1.150251312s to provisionDockerMachine
	I1115 09:57:21.850861  566770 start.go:293] postStartSetup for "NoKubernetes-941483" (driver="docker")
	I1115 09:57:21.850874  566770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:57:21.850952  566770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:57:21.851009  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.873738  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:21.972135  566770 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:57:21.976069  566770 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:57:21.976105  566770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:57:21.976120  566770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:57:21.976177  566770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:57:21.976280  566770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:57:21.976295  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:57:21.976436  566770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:57:21.984456  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:22.003602  566770 start.go:296] duration metric: took 152.722902ms for postStartSetup
	I1115 09:57:22.003690  566770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:57:22.003759  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:22.023417  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:22.120238  566770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:57:22.126372  566770 fix.go:56] duration metric: took 1.447156514s for fixHost
	I1115 09:57:22.126427  566770 start.go:83] releasing machines lock for "NoKubernetes-941483", held for 1.447248965s
	I1115 09:57:22.126503  566770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-941483
	I1115 09:57:22.149503  566770 ssh_runner.go:195] Run: cat /version.json
	I1115 09:57:22.149562  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:22.149740  566770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:57:22.149822  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:22.173128  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:22.173532  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:22.355058  566770 ssh_runner.go:195] Run: systemctl --version
	I1115 09:57:22.363576  566770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:22.380912  566770 out.go:179]   - Kubernetes: Stopping ...
	I1115 09:57:22.379098  567634 start.go:309] selected driver: docker
	I1115 09:57:22.379118  567634 start.go:930] validating driver "docker" against &{Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:22.379261  567634 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:57:22.379361  567634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:57:22.469140  567634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 09:57:22.453500121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:57:22.470079  567634 cni.go:84] Creating CNI manager for ""
	I1115 09:57:22.470194  567634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:57:22.470249  567634 start.go:353] cluster config:
	{Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:fals
e storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:22.472277  567634 out.go:179] * Starting "pause-717282" primary control-plane node in "pause-717282" cluster
	I1115 09:57:22.473556  567634 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:57:22.474916  567634 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:57:22.476164  567634 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:57:22.476219  567634 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:57:22.476229  567634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:57:22.476235  567634 cache.go:65] Caching tarball of preloaded images
	I1115 09:57:22.476352  567634 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:57:22.476367  567634 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:57:22.476595  567634 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/config.json ...
	I1115 09:57:22.502310  567634 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:57:22.502335  567634 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:57:22.502359  567634 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:57:22.502408  567634 start.go:360] acquireMachinesLock for pause-717282: {Name:mk297e9b6cc9ee35d41615de1f5656e315b5bed1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:57:22.502476  567634 start.go:364] duration metric: took 43.66µs to acquireMachinesLock for "pause-717282"
	I1115 09:57:22.502504  567634 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:57:22.502512  567634 fix.go:54] fixHost starting: 
	I1115 09:57:22.502776  567634 cli_runner.go:164] Run: docker container inspect pause-717282 --format={{.State.Status}}
	I1115 09:57:22.528441  567634 fix.go:112] recreateIfNeeded on pause-717282: state=Running err=<nil>
	W1115 09:57:22.528473  567634 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:57:21.509863  564357 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:21.509888  564357 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:57:21.509944  564357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:57:21.538639  564357 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:21.538665  564357 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:57:21.538676  564357 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1115 09:57:21.538795  564357 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-450177 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-450177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:57:21.538884  564357 ssh_runner.go:195] Run: crio config
	I1115 09:57:21.592630  564357 cni.go:84] Creating CNI manager for ""
	I1115 09:57:21.592651  564357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:57:21.592668  564357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:57:21.592694  564357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-450177 NodeName:force-systemd-env-450177 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:57:21.592817  564357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-450177"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
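The YAML block above is the kubeadm config minikube renders before invoking kubeadm init and copies to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Purely as an illustrative sketch (not minikube's actual template or code), a struct-plus-text/template approach in Go can render a config of this shape; the struct, field names, and template here are simplified assumptions.

// Illustrative sketch only: renders a minimal kubeadm InitConfiguration
// similar in shape to the one logged above. The struct, field names, and
// template are simplified assumptions, not minikube's real template.
package main

import (
	"os"
	"text/template"
)

type initConfig struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	cfg := initConfig{
		AdvertiseAddress: "192.168.94.2",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "force-systemd-env-450177",
	}
	// template.Must panics on a parse error, which is acceptable for a fixed template.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}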
	I1115 09:57:21.592876  564357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:57:21.601912  564357 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:57:21.601986  564357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:57:21.609989  564357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1115 09:57:21.623992  564357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:57:21.639765  564357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1115 09:57:21.653374  564357 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:57:21.657254  564357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:57:21.667318  564357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:21.754538  564357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:57:21.790279  564357 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177 for IP: 192.168.94.2
	I1115 09:57:21.790304  564357 certs.go:195] generating shared ca certs ...
	I1115 09:57:21.790326  564357 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:21.790523  564357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:57:21.790604  564357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:57:21.790614  564357 certs.go:257] generating profile certs ...
	I1115 09:57:21.790684  564357 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.key
	I1115 09:57:21.790698  564357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.crt with IP's: []
	I1115 09:57:22.059238  564357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.crt ...
	I1115 09:57:22.059269  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.crt: {Name:mkbde0b0f8c1a9fe7e6fce750f107ff9e6a01051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.059473  564357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.key ...
	I1115 09:57:22.059493  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.key: {Name:mk81163104b3521e59fb634bd8615e494df2379d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.059617  564357 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3
	I1115 09:57:22.059639  564357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1115 09:57:22.343451  564357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3 ...
	I1115 09:57:22.343488  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3: {Name:mk8b137a85cf4ec22194a161be811fc270ad9c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.343701  564357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3 ...
	I1115 09:57:22.343727  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3: {Name:mk4fa641ef06862c95d57c432b3ef781a51543e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.343908  564357 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt
	I1115 09:57:22.344016  564357 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key
	I1115 09:57:22.344105  564357 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key
	I1115 09:57:22.344130  564357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt with IP's: []
	I1115 09:57:22.639345  564357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt ...
	I1115 09:57:22.639376  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt: {Name:mk7d0760b07b8c46cfde3caf5b66728675fb61f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.639571  564357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key ...
	I1115 09:57:22.639589  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key: {Name:mkebbf3c070020a54e0d1d4866e8b663bc0f6f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
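The cert steps above generate the per-profile client, apiserver, and proxy-client certificates, with the apiserver cert carrying the IP SANs listed in the log. As an illustrative sketch only (not minikube's crypto.go), Go's standard crypto/x509 package can issue a certificate with IP SANs as shown below; the key size, subject, validity window, and self-signing shortcut are assumptions made for brevity.

// Illustrative sketch only: issues a self-signed certificate with IP SANs,
// loosely mirroring the profile certs generated in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 2048-bit RSA key; the real key type and size are assumptions here.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the list logged for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	// Self-signed for illustration: the template is also used as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}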
	I1115 09:57:22.639677  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:57:22.639696  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:57:22.639709  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:57:22.639723  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:57:22.639735  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:57:22.639748  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:57:22.639760  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:57:22.639785  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:57:22.639835  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:57:22.639868  564357 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:57:22.639878  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:57:22.639902  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:57:22.639928  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:57:22.639948  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:57:22.639986  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:22.640011  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.640028  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:57:22.640040  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.640583  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:57:22.662556  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:57:22.681162  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:57:22.702476  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:57:22.721858  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1115 09:57:22.741686  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:57:22.761076  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:57:22.779451  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:57:22.797433  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:57:22.817146  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:57:22.835482  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:57:22.854679  564357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:57:22.868117  564357 ssh_runner.go:195] Run: openssl version
	I1115 09:57:22.874918  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:57:22.883788  564357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.887658  564357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.887716  564357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.926799  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:57:22.935850  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:57:22.944683  564357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.948593  564357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.948652  564357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.984143  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:57:22.993319  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:57:23.002641  564357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:57:23.007766  564357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:57:23.007832  564357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:57:23.045565  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:57:23.055057  564357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:57:23.058811  564357 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:57:23.058872  564357 kubeadm.go:401] StartCluster: {Name:force-systemd-env-450177 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-450177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:23.058938  564357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:57:23.058979  564357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:57:23.086789  564357 cri.go:89] found id: ""
	I1115 09:57:23.086864  564357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:57:23.095623  564357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:57:23.104062  564357 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:57:23.104134  564357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:57:23.112995  564357 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:57:23.113016  564357 kubeadm.go:158] found existing configuration files:
	
	I1115 09:57:23.113066  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:57:23.121320  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:57:23.121408  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:57:23.129286  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:57:23.137375  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:57:23.137467  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:57:23.145409  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:57:23.153435  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:57:23.153525  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:57:23.161645  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:57:23.169427  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:57:23.169487  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:57:23.177028  564357 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:57:23.219708  564357 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:57:23.219813  564357 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:57:23.262235  564357 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:57:23.262324  564357 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:57:23.262374  564357 kubeadm.go:319] OS: Linux
	I1115 09:57:23.262471  564357 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:57:23.262559  564357 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:57:23.262650  564357 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:57:23.262727  564357 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:57:23.262785  564357 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:57:23.262848  564357 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:57:23.262930  564357 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:57:23.263040  564357 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:57:23.329050  564357 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:57:23.329218  564357 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:57:23.329364  564357 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:57:23.337345  564357 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:57:22.038332  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:57:22.038798  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:57:22.038863  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:57:22.038924  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:57:22.069452  539051 cri.go:89] found id: "83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:22.069473  539051 cri.go:89] found id: ""
	I1115 09:57:22.069481  539051 logs.go:282] 1 containers: [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e]
	I1115 09:57:22.069540  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.073825  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:57:22.073907  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:57:22.106822  539051 cri.go:89] found id: ""
	I1115 09:57:22.106856  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.106867  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:57:22.106875  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:57:22.106938  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:57:22.141734  539051 cri.go:89] found id: ""
	I1115 09:57:22.141762  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.141779  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:57:22.141787  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:57:22.141848  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:57:22.177946  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:22.177972  539051 cri.go:89] found id: ""
	I1115 09:57:22.177983  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:57:22.178043  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.183230  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:57:22.183297  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:57:22.219178  539051 cri.go:89] found id: ""
	I1115 09:57:22.219202  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.219210  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:57:22.219216  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:57:22.219262  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:57:22.254429  539051 cri.go:89] found id: "ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:22.254450  539051 cri.go:89] found id: ""
	I1115 09:57:22.254460  539051 logs.go:282] 1 containers: [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad]
	I1115 09:57:22.254525  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.258879  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:57:22.258949  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:57:22.293820  539051 cri.go:89] found id: ""
	I1115 09:57:22.293847  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.293857  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:57:22.293865  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:57:22.293924  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:57:22.329659  539051 cri.go:89] found id: ""
	I1115 09:57:22.329722  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.329732  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:57:22.329746  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:57:22.329786  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:57:22.382973  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:57:22.382998  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:57:22.439001  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:57:22.439054  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:57:22.541514  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:57:22.541562  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:57:22.560875  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:57:22.560904  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:57:22.623938  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:57:22.623962  539051 logs.go:123] Gathering logs for kube-apiserver [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e] ...
	I1115 09:57:22.623977  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:22.659874  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:57:22.659911  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:22.708485  539051 logs.go:123] Gathering logs for kube-controller-manager [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad] ...
	I1115 09:57:22.708518  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:22.530680  567634 out.go:252] * Updating the running docker "pause-717282" container ...
	I1115 09:57:22.530716  567634 machine.go:94] provisionDockerMachine start ...
	I1115 09:57:22.530810  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:22.552974  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:22.553302  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:22.553326  567634 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:57:22.690858  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-717282
	
	I1115 09:57:22.690894  567634 ubuntu.go:182] provisioning hostname "pause-717282"
	I1115 09:57:22.690957  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:22.710737  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:22.710990  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:22.711010  567634 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-717282 && echo "pause-717282" | sudo tee /etc/hostname
	I1115 09:57:22.851907  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-717282
	
	I1115 09:57:22.852013  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:22.871328  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:22.871698  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:22.871731  567634 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-717282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-717282/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-717282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:57:23.002361  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:57:23.002403  567634 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:57:23.002447  567634 ubuntu.go:190] setting up certificates
	I1115 09:57:23.002462  567634 provision.go:84] configureAuth start
	I1115 09:57:23.002568  567634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-717282
	I1115 09:57:23.021514  567634 provision.go:143] copyHostCerts
	I1115 09:57:23.021598  567634 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:57:23.021618  567634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:57:23.021702  567634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:57:23.021851  567634 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:57:23.021865  567634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:57:23.021913  567634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:57:23.022013  567634 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:57:23.022023  567634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:57:23.022062  567634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:57:23.022154  567634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.pause-717282 san=[127.0.0.1 192.168.103.2 localhost minikube pause-717282]
	I1115 09:57:23.186440  567634 provision.go:177] copyRemoteCerts
	I1115 09:57:23.186501  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:57:23.186542  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.207435  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.308520  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:57:23.327647  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:57:23.347874  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:57:23.365764  567634 provision.go:87] duration metric: took 363.283285ms to configureAuth
	I1115 09:57:23.365797  567634 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:57:23.366054  567634 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:23.366172  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.384847  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:23.385122  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:23.385140  567634 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:57:23.669354  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:57:23.669382  567634 machine.go:97] duration metric: took 1.138656638s to provisionDockerMachine
	I1115 09:57:23.669442  567634 start.go:293] postStartSetup for "pause-717282" (driver="docker")
	I1115 09:57:23.669458  567634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:57:23.669554  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:57:23.669613  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.690034  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.785652  567634 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:57:23.789660  567634 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:57:23.789685  567634 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:57:23.789696  567634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:57:23.789743  567634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:57:23.789833  567634 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:57:23.789937  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:57:23.798305  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:23.816410  567634 start.go:296] duration metric: took 146.932102ms for postStartSetup
	I1115 09:57:23.816505  567634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:57:23.816578  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.835621  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.926962  567634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:57:23.932001  567634 fix.go:56] duration metric: took 1.42948264s for fixHost
	I1115 09:57:23.932029  567634 start.go:83] releasing machines lock for "pause-717282", held for 1.429542366s
	I1115 09:57:23.932091  567634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-717282
	I1115 09:57:23.951279  567634 ssh_runner.go:195] Run: cat /version.json
	I1115 09:57:23.951336  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.951373  567634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:57:23.951460  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.970061  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.971165  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:24.114710  567634 ssh_runner.go:195] Run: systemctl --version
	I1115 09:57:24.121745  567634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:57:24.158632  567634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:57:24.163743  567634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:57:24.163932  567634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:57:24.172186  567634 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:57:24.172209  567634 start.go:496] detecting cgroup driver to use...
	I1115 09:57:24.172242  567634 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:57:24.172291  567634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:57:24.188843  567634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:57:24.202084  567634 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:57:24.202155  567634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:57:24.218733  567634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:57:24.232455  567634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:57:24.350529  567634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:57:24.456021  567634 docker.go:234] disabling docker service ...
	I1115 09:57:24.456087  567634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:57:24.471065  567634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:57:24.484082  567634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:57:24.595521  567634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:57:24.711360  567634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:57:24.724511  567634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:57:24.739327  567634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:57:24.739401  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.748648  567634 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:57:24.748733  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.758288  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.767559  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.776869  567634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:57:24.785176  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.795190  567634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.805340  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.815620  567634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:57:24.824296  567634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:57:24.832194  567634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:24.940737  567634 ssh_runner.go:195] Run: sudo systemctl restart crio
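The Run lines above show how CRI-O is reconfigured in place: sed rewrites /etc/crio/crio.conf.d/02-crio.conf to set the pause image and the systemd cgroup manager, re-adds conmon_cgroup, injects the net.ipv4.ip_unprivileged_port_start sysctl, and then crio is restarted. A minimal Go sketch of the same style of line substitutions, applied to an in-memory sample config, might look like the following; the sample input and the in-memory approach are assumptions, not minikube's implementation.

// Illustrative sketch only: applies the same kind of line substitutions the
// log above performs with sed on /etc/crio/crio.conf.d/02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical sample config for demonstration.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// Point CRI-O at the pause image and cgroup driver kubeadm expects.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// Drop any existing conmon_cgroup line and re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}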
	I1115 09:57:25.090523  567634 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:57:25.090599  567634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:57:25.094899  567634 start.go:564] Will wait 60s for crictl version
	I1115 09:57:25.094967  567634 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.098664  567634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:57:25.125758  567634 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:57:25.125847  567634 ssh_runner.go:195] Run: crio --version
	I1115 09:57:25.157795  567634 ssh_runner.go:195] Run: crio --version
	I1115 09:57:25.189926  567634 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:57:22.382275  566770 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1115 09:57:22.415324  566770 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1115 09:57:22.415419  566770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:57:22.470792  566770 cri.go:89] found id: "de1daf1472f342f2754bcfb06aacfd2b598ba2492fff4af9ec4e50ea6e0c8072"
	I1115 09:57:22.470814  566770 cri.go:89] found id: "d2f119ee882dd5006a213db8343d85276aabc9b6ff688305cee7bb3add5cff14"
	I1115 09:57:22.470819  566770 cri.go:89] found id: "4fcd7ebe384c657ab29f5563dbde05723f4320fd6df7a83a5a40ee092da7e3cc"
	I1115 09:57:22.470823  566770 cri.go:89] found id: "31bdc6a30424bd7e8b09f570c670886a183e120c8e427f29c72e5cfff3d0a462"
	I1115 09:57:22.470828  566770 cri.go:89] found id: ""
	W1115 09:57:22.470838  566770 kubeadm.go:839] found 4 kube-system containers to stop
	I1115 09:57:22.470849  566770 cri.go:252] Stopping containers: [de1daf1472f342f2754bcfb06aacfd2b598ba2492fff4af9ec4e50ea6e0c8072 d2f119ee882dd5006a213db8343d85276aabc9b6ff688305cee7bb3add5cff14 4fcd7ebe384c657ab29f5563dbde05723f4320fd6df7a83a5a40ee092da7e3cc 31bdc6a30424bd7e8b09f570c670886a183e120c8e427f29c72e5cfff3d0a462]
	I1115 09:57:22.470908  566770 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.475660  566770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 de1daf1472f342f2754bcfb06aacfd2b598ba2492fff4af9ec4e50ea6e0c8072 d2f119ee882dd5006a213db8343d85276aabc9b6ff688305cee7bb3add5cff14 4fcd7ebe384c657ab29f5563dbde05723f4320fd6df7a83a5a40ee092da7e3cc 31bdc6a30424bd7e8b09f570c670886a183e120c8e427f29c72e5cfff3d0a462
	I1115 09:57:25.191216  567634 cli_runner.go:164] Run: docker network inspect pause-717282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:57:25.209899  567634 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 09:57:25.214426  567634 kubeadm.go:884] updating cluster {Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:57:25.214578  567634 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:57:25.214636  567634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:57:25.249181  567634 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:25.249211  567634 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:57:25.249265  567634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:57:25.278888  567634 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:25.278911  567634 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:57:25.278921  567634 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 09:57:25.279045  567634 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-717282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:57:25.279122  567634 ssh_runner.go:195] Run: crio config
	I1115 09:57:25.338091  567634 cni.go:84] Creating CNI manager for ""
	I1115 09:57:25.338116  567634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:57:25.338138  567634 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:57:25.338167  567634 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-717282 NodeName:pause-717282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:57:25.338321  567634 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-717282"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:57:25.338415  567634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:57:25.348471  567634 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:57:25.348553  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:57:25.357334  567634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:57:25.372110  567634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:57:25.385703  567634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1115 09:57:25.400412  567634 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:57:25.405020  567634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:25.535965  567634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:57:25.552919  567634 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282 for IP: 192.168.103.2
	I1115 09:57:25.552942  567634 certs.go:195] generating shared ca certs ...
	I1115 09:57:25.552963  567634 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:25.553131  567634 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:57:25.553184  567634 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:57:25.553197  567634 certs.go:257] generating profile certs ...
	I1115 09:57:25.553384  567634 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.key
	I1115 09:57:25.553484  567634 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/apiserver.key.6e55ec4b
	I1115 09:57:25.553530  567634 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/proxy-client.key
	I1115 09:57:25.553669  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:57:25.553709  567634 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:57:25.553722  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:57:25.553760  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:57:25.553797  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:57:25.553826  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:57:25.553879  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:25.554739  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:57:25.575806  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:57:25.594883  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:57:25.614735  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:57:25.632337  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:57:25.653462  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:57:25.673839  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:57:25.695327  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:57:25.716113  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:57:25.735691  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:57:25.753883  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:57:25.772284  567634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:57:25.785358  567634 ssh_runner.go:195] Run: openssl version
	I1115 09:57:25.792717  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:57:25.801827  567634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:57:25.805879  567634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:57:25.805943  567634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:57:25.843107  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:57:25.852805  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:57:25.861810  567634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:25.865592  567634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:25.865647  567634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:25.899910  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:57:25.908744  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:57:25.919160  567634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:57:25.923319  567634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:57:25.923378  567634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:57:25.958571  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:57:25.967570  567634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:57:25.971648  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:57:26.007577  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:57:26.043237  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:57:26.078340  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:57:26.114246  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:57:26.148697  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1115 09:57:26.185467  567634 kubeadm.go:401] StartCluster: {Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:26.185620  567634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:57:26.185707  567634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:57:26.215220  567634 cri.go:89] found id: "417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b"
	I1115 09:57:26.215244  567634 cri.go:89] found id: "4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895"
	I1115 09:57:26.215266  567634 cri.go:89] found id: "44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4"
	I1115 09:57:26.215271  567634 cri.go:89] found id: "d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813"
	I1115 09:57:26.215280  567634 cri.go:89] found id: "c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd"
	I1115 09:57:26.215284  567634 cri.go:89] found id: "97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132"
	I1115 09:57:26.215287  567634 cri.go:89] found id: "eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407"
	I1115 09:57:26.215289  567634 cri.go:89] found id: ""
	I1115 09:57:26.215329  567634 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 09:57:26.228020  567634 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:57:26Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:57:26.228085  567634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:57:26.236306  567634 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 09:57:26.236325  567634 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 09:57:26.236362  567634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 09:57:26.244190  567634 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:57:26.244939  567634 kubeconfig.go:125] found "pause-717282" server: "https://192.168.103.2:8443"
	I1115 09:57:26.245848  567634 kapi.go:59] client config for pause-717282: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:57:26.246255  567634 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 09:57:26.246269  567634 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 09:57:26.246274  567634 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 09:57:26.246279  567634 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 09:57:26.246283  567634 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 09:57:26.246672  567634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 09:57:26.254427  567634 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 09:57:26.254456  567634 kubeadm.go:602] duration metric: took 18.126221ms to restartPrimaryControlPlane
	I1115 09:57:26.254465  567634 kubeadm.go:403] duration metric: took 69.011622ms to StartCluster
	I1115 09:57:26.254479  567634 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:26.254543  567634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:57:26.256018  567634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:26.256315  567634 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:57:26.256379  567634 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:57:26.256629  567634 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:26.258520  567634 out.go:179] * Enabled addons: 
	I1115 09:57:26.258520  567634 out.go:179] * Verifying Kubernetes components...
	I1115 09:57:23.340162  564357 out.go:252]   - Generating certificates and keys ...
	I1115 09:57:23.340257  564357 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:57:23.340343  564357 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:57:23.520321  564357 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:57:23.790623  564357 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:57:24.170953  564357 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:57:24.255787  564357 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:57:24.414335  564357 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:57:24.414528  564357 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-450177 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 09:57:24.706223  564357 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:57:24.706368  564357 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-450177 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 09:57:24.765525  564357 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:57:25.057873  564357 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:57:25.160821  564357 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:57:25.160999  564357 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:57:25.734866  564357 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:57:26.259705  567634 addons.go:515] duration metric: took 3.339832ms for enable addons: enabled=[]
	I1115 09:57:26.259732  567634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:26.377251  567634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:57:26.391549  567634 node_ready.go:35] waiting up to 6m0s for node "pause-717282" to be "Ready" ...
	I1115 09:57:26.399606  567634 node_ready.go:49] node "pause-717282" is "Ready"
	I1115 09:57:26.399637  567634 node_ready.go:38] duration metric: took 8.051637ms for node "pause-717282" to be "Ready" ...
	I1115 09:57:26.399653  567634 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:57:26.399705  567634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:57:26.411712  567634 api_server.go:72] duration metric: took 155.3604ms to wait for apiserver process to appear ...
	I1115 09:57:26.411741  567634 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:57:26.411761  567634 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 09:57:26.416788  567634 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 09:57:26.417758  567634 api_server.go:141] control plane version: v1.34.1
	I1115 09:57:26.417785  567634 api_server.go:131] duration metric: took 6.036453ms to wait for apiserver health ...
	I1115 09:57:26.417796  567634 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:57:26.420830  567634 system_pods.go:59] 7 kube-system pods found
	I1115 09:57:26.420858  567634 system_pods.go:61] "coredns-66bc5c9577-8rvls" [dbd58b44-1d9a-428a-ab72-1c53e2329819] Running
	I1115 09:57:26.420866  567634 system_pods.go:61] "etcd-pause-717282" [f97712bd-4d6b-40f1-8135-aa322569a888] Running
	I1115 09:57:26.420872  567634 system_pods.go:61] "kindnet-mgc2d" [9d5715ed-b45b-4e26-b01e-11cc5c70b606] Running
	I1115 09:57:26.420878  567634 system_pods.go:61] "kube-apiserver-pause-717282" [952ff006-8bdb-41a9-bc42-322899d2bd04] Running
	I1115 09:57:26.420886  567634 system_pods.go:61] "kube-controller-manager-pause-717282" [df3b40b6-6294-4f5b-85e7-5c9192e05877] Running
	I1115 09:57:26.420892  567634 system_pods.go:61] "kube-proxy-f24b6" [c143796c-42fe-4540-9d67-1c46241d2e12] Running
	I1115 09:57:26.420901  567634 system_pods.go:61] "kube-scheduler-pause-717282" [a3430311-abf1-4802-8ce4-eea1311967b6] Running
	I1115 09:57:26.420907  567634 system_pods.go:74] duration metric: took 3.10461ms to wait for pod list to return data ...
	I1115 09:57:26.420919  567634 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:57:26.422836  567634 default_sa.go:45] found service account: "default"
	I1115 09:57:26.422859  567634 default_sa.go:55] duration metric: took 1.930301ms for default service account to be created ...
	I1115 09:57:26.422868  567634 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:57:26.425187  567634 system_pods.go:86] 7 kube-system pods found
	I1115 09:57:26.425210  567634 system_pods.go:89] "coredns-66bc5c9577-8rvls" [dbd58b44-1d9a-428a-ab72-1c53e2329819] Running
	I1115 09:57:26.425217  567634 system_pods.go:89] "etcd-pause-717282" [f97712bd-4d6b-40f1-8135-aa322569a888] Running
	I1115 09:57:26.425222  567634 system_pods.go:89] "kindnet-mgc2d" [9d5715ed-b45b-4e26-b01e-11cc5c70b606] Running
	I1115 09:57:26.425227  567634 system_pods.go:89] "kube-apiserver-pause-717282" [952ff006-8bdb-41a9-bc42-322899d2bd04] Running
	I1115 09:57:26.425236  567634 system_pods.go:89] "kube-controller-manager-pause-717282" [df3b40b6-6294-4f5b-85e7-5c9192e05877] Running
	I1115 09:57:26.425241  567634 system_pods.go:89] "kube-proxy-f24b6" [c143796c-42fe-4540-9d67-1c46241d2e12] Running
	I1115 09:57:26.425246  567634 system_pods.go:89] "kube-scheduler-pause-717282" [a3430311-abf1-4802-8ce4-eea1311967b6] Running
	I1115 09:57:26.425259  567634 system_pods.go:126] duration metric: took 2.380039ms to wait for k8s-apps to be running ...
	I1115 09:57:26.425270  567634 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:57:26.425319  567634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:26.439569  567634 system_svc.go:56] duration metric: took 14.28634ms WaitForService to wait for kubelet
	I1115 09:57:26.439603  567634 kubeadm.go:587] duration metric: took 183.257061ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:57:26.439627  567634 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:57:26.442559  567634 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:57:26.442589  567634 node_conditions.go:123] node cpu capacity is 8
	I1115 09:57:26.442602  567634 node_conditions.go:105] duration metric: took 2.968447ms to run NodePressure ...
	I1115 09:57:26.442619  567634 start.go:242] waiting for startup goroutines ...
	I1115 09:57:26.442629  567634 start.go:247] waiting for cluster config update ...
	I1115 09:57:26.442642  567634 start.go:256] writing updated cluster config ...
	I1115 09:57:26.443012  567634 ssh_runner.go:195] Run: rm -f paused
	I1115 09:57:26.447130  567634 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:57:26.447872  567634 kapi.go:59] client config for pause-717282: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:57:26.450908  567634 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rvls" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.454942  567634 pod_ready.go:94] pod "coredns-66bc5c9577-8rvls" is "Ready"
	I1115 09:57:26.454964  567634 pod_ready.go:86] duration metric: took 4.035957ms for pod "coredns-66bc5c9577-8rvls" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.456860  567634 pod_ready.go:83] waiting for pod "etcd-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.460471  567634 pod_ready.go:94] pod "etcd-pause-717282" is "Ready"
	I1115 09:57:26.460496  567634 pod_ready.go:86] duration metric: took 3.614532ms for pod "etcd-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.462379  567634 pod_ready.go:83] waiting for pod "kube-apiserver-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.465764  567634 pod_ready.go:94] pod "kube-apiserver-pause-717282" is "Ready"
	I1115 09:57:26.465784  567634 pod_ready.go:86] duration metric: took 3.378099ms for pod "kube-apiserver-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.467843  567634 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.851460  567634 pod_ready.go:94] pod "kube-controller-manager-pause-717282" is "Ready"
	I1115 09:57:26.851492  567634 pod_ready.go:86] duration metric: took 383.622366ms for pod "kube-controller-manager-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:27.051955  567634 pod_ready.go:83] waiting for pod "kube-proxy-f24b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.690648  564357 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:57:26.788219  564357 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:57:26.857641  564357 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:57:27.322961  564357 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:57:27.323385  564357 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:57:27.327337  564357 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:57:27.452348  567634 pod_ready.go:94] pod "kube-proxy-f24b6" is "Ready"
	I1115 09:57:27.452378  567634 pod_ready.go:86] duration metric: took 400.396269ms for pod "kube-proxy-f24b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:27.651629  567634 pod_ready.go:83] waiting for pod "kube-scheduler-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:28.051990  567634 pod_ready.go:94] pod "kube-scheduler-pause-717282" is "Ready"
	I1115 09:57:28.052016  567634 pod_ready.go:86] duration metric: took 400.362435ms for pod "kube-scheduler-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:28.052027  567634 pod_ready.go:40] duration metric: took 1.60484476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:57:28.097677  567634 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:57:28.099546  567634 out.go:179] * Done! kubectl is now configured to use "pause-717282" cluster and "default" namespace by default
	I1115 09:57:25.236907  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:57:25.237487  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:57:25.237567  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:57:25.237631  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:57:25.271220  539051 cri.go:89] found id: "83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:25.271248  539051 cri.go:89] found id: ""
	I1115 09:57:25.271259  539051 logs.go:282] 1 containers: [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e]
	I1115 09:57:25.271321  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.276555  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:57:25.276620  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:57:25.306797  539051 cri.go:89] found id: ""
	I1115 09:57:25.306825  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.306833  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:57:25.306839  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:57:25.306886  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:57:25.335340  539051 cri.go:89] found id: ""
	I1115 09:57:25.335367  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.335376  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:57:25.335382  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:57:25.335453  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:57:25.365633  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:25.365659  539051 cri.go:89] found id: ""
	I1115 09:57:25.365671  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:57:25.365737  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.370024  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:57:25.370099  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:57:25.400166  539051 cri.go:89] found id: ""
	I1115 09:57:25.400195  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.400205  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:57:25.400214  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:57:25.400271  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:57:25.432024  539051 cri.go:89] found id: "ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:25.432045  539051 cri.go:89] found id: ""
	I1115 09:57:25.432053  539051 logs.go:282] 1 containers: [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad]
	I1115 09:57:25.432168  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.436894  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:57:25.436971  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:57:25.470230  539051 cri.go:89] found id: ""
	I1115 09:57:25.470256  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.470265  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:57:25.470273  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:57:25.470333  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:57:25.498849  539051 cri.go:89] found id: ""
	I1115 09:57:25.498874  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.498881  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:57:25.498891  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:57:25.498901  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:57:25.578543  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:57:25.578571  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:57:25.595823  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:57:25.595852  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:57:25.656795  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:57:25.656820  539051 logs.go:123] Gathering logs for kube-apiserver [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e] ...
	I1115 09:57:25.656836  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:25.692282  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:57:25.692317  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:25.743541  539051 logs.go:123] Gathering logs for kube-controller-manager [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad] ...
	I1115 09:57:25.743573  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:25.772726  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:57:25.772756  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:57:25.817257  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:57:25.817302  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:57:28.349774  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:57:28.350176  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:57:28.350228  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:57:28.350277  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:57:28.377280  539051 cri.go:89] found id: "83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:28.377302  539051 cri.go:89] found id: ""
	I1115 09:57:28.377319  539051 logs.go:282] 1 containers: [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e]
	I1115 09:57:28.377371  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:28.381274  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:57:28.381342  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:57:28.411805  539051 cri.go:89] found id: ""
	I1115 09:57:28.411836  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.411846  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:57:28.411854  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:57:28.411914  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:57:28.445530  539051 cri.go:89] found id: ""
	I1115 09:57:28.445560  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.445570  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:57:28.445578  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:57:28.445639  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:57:28.474647  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:28.474665  539051 cri.go:89] found id: ""
	I1115 09:57:28.474674  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:57:28.474727  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:28.479068  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:57:28.479133  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:57:28.504485  539051 cri.go:89] found id: ""
	I1115 09:57:28.504516  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.504527  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:57:28.504536  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:57:28.504608  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:57:28.531644  539051 cri.go:89] found id: "ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:28.531679  539051 cri.go:89] found id: ""
	I1115 09:57:28.531690  539051 logs.go:282] 1 containers: [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad]
	I1115 09:57:28.531748  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:28.535718  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:57:28.535796  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:57:28.564027  539051 cri.go:89] found id: ""
	I1115 09:57:28.564052  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.564062  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:57:28.564071  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:57:28.564134  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:57:28.592339  539051 cri.go:89] found id: ""
	I1115 09:57:28.592365  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.592374  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:57:28.592386  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:57:28.592434  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:57:28.669368  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:57:28.669416  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:57:28.686518  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:57:28.686546  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:57:28.753380  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:57:28.753422  539051 logs.go:123] Gathering logs for kube-apiserver [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e] ...
	I1115 09:57:28.753438  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:28.790597  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:57:28.790633  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:28.847787  539051 logs.go:123] Gathering logs for kube-controller-manager [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad] ...
	I1115 09:57:28.847821  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:28.884258  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:57:28.884295  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:57:28.935033  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:57:28.935074  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.034986787Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.035820823Z" level=info msg="Conmon does support the --sync option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.035847624Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.035866841Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.036664837Z" level=info msg="Conmon does support the --sync option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.036685062Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.040757891Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.040785651Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.04132919Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.041767426Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.041841412Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.047530461Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.085853629Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-8rvls Namespace:kube-system ID:2c036cd4313f3f5e49d949b26899c98ae2ba195e2c7fe9c74d7a87942be390f0 UID:dbd58b44-1d9a-428a-ab72-1c53e2329819 NetNS:/var/run/netns/d524d275-e9c4-4089-88bc-aa3912ffde82 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051e080}] Aliases:map[]}"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086020858Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-8rvls for CNI network kindnet (type=ptp)"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086406268Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086432965Z" level=info msg="Starting seccomp notifier watcher"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086493819Z" level=info msg="Create NRI interface"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086609842Z" level=info msg="built-in NRI default validator is disabled"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086625684Z" level=info msg="runtime interface created"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.08663903Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086646478Z" level=info msg="runtime interface starting up..."
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086653131Z" level=info msg="starting plugins..."
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086666264Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086952724Z" level=info msg="No systemd watchdog enabled"
	Nov 15 09:57:25 pause-717282 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	417a035d5497a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   2c036cd4313f3       coredns-66bc5c9577-8rvls               kube-system
	4be3d6cc9c884       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   22 seconds ago      Running             kube-proxy                0                   f4d1cad4cd95d       kube-proxy-f24b6                       kube-system
	44120701377c8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   22 seconds ago      Running             kindnet-cni               0                   6179d40af3013       kindnet-mgc2d                          kube-system
	d5226f6ec3310       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   32 seconds ago      Running             kube-apiserver            0                   9d3b9a42b0202       kube-apiserver-pause-717282            kube-system
	c955341aff41e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   32 seconds ago      Running             etcd                      0                   c0136c282e2df       etcd-pause-717282                      kube-system
	97f13c5dcb417       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   32 seconds ago      Running             kube-scheduler            0                   78378de54f868       kube-scheduler-pause-717282            kube-system
	eedd1774f1da1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   32 seconds ago      Running             kube-controller-manager   0                   f0d58fe430009       kube-controller-manager-pause-717282   kube-system
	
	
	==> coredns [417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46635 - 32877 "HINFO IN 5147139433601398208.3392774817633581474. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022565606s
	
	
	==> describe nodes <==
	Name:               pause-717282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-717282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=pause-717282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_57_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:57:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-717282
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:57:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-717282
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                8bbaf300-aac3-4695-b688-b4a05ec169cb
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8rvls                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-717282                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-mgc2d                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-717282             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-pause-717282    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-f24b6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-717282             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node pause-717282 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node pause-717282 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node pause-717282 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node pause-717282 event: Registered Node pause-717282 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-717282 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd] <==
	{"level":"warn","ts":"2025-11-15T09:56:59.778285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.789871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.803382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.815594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.825387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.835594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.844603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.854903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.863618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.873589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.899540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.908276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.915803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.923951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.932523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.947068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.954456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.962030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.981344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.984982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.007921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.014878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.022941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.088649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56956","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:57:15.436720Z","caller":"traceutil/trace.go:172","msg":"trace[1087238230] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"132.652909ms","start":"2025-11-15T09:57:15.304047Z","end":"2025-11-15T09:57:15.436699Z","steps":["trace[1087238230] 'process raft request'  (duration: 132.516629ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:57:31 up  1:39,  0 user,  load average: 4.39, 2.41, 1.55
	Linux pause-717282 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4] <==
	I1115 09:57:09.299601       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:57:09.299878       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 09:57:09.300050       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:57:09.300069       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:57:09.300091       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:57:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:57:09.556002       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:57:09.556052       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:57:09.556070       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:57:09.556250       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:57:09.756378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:57:09.756687       1 metrics.go:72] Registering metrics
	I1115 09:57:09.756777       1 controller.go:711] "Syncing nftables rules"
	I1115 09:57:19.500489       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 09:57:19.500567       1 main.go:301] handling current node
	I1115 09:57:29.502496       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 09:57:29.502532       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813] <==
	E1115 09:57:00.668594       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1115 09:57:00.716313       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:57:00.720611       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:00.720689       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 09:57:00.727460       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:00.727985       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:57:00.813346       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:57:01.519433       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 09:57:01.523013       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 09:57:01.523026       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:57:02.025593       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:57:02.065102       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:57:02.122224       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 09:57:02.128458       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1115 09:57:02.129612       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:57:02.133619       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:57:02.533727       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:57:03.406080       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:57:03.415646       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 09:57:03.422905       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:57:07.790022       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:07.794794       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:08.537554       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:57:08.636805       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407] <==
	I1115 09:57:07.533185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 09:57:07.534363       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 09:57:07.534479       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 09:57:07.534497       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:57:07.534543       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:57:07.534631       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:57:07.534681       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 09:57:07.534690       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:57:07.535306       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:57:07.535660       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:57:07.535787       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:57:07.537487       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:57:07.538607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:57:07.538693       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 09:57:07.539082       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 09:57:07.539236       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 09:57:07.539292       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 09:57:07.539303       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 09:57:07.539312       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 09:57:07.544822       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:57:07.546432       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-717282" podCIDRs=["10.244.0.0/24"]
	I1115 09:57:07.550680       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 09:57:07.556992       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:57:07.566499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:57:22.487188       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895] <==
	I1115 09:57:09.155334       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:57:09.244242       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:57:09.344714       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:57:09.344795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 09:57:09.344913       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:57:09.363795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:57:09.363854       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:57:09.369221       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:57:09.369703       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:57:09.369741       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:57:09.371842       1 config.go:200] "Starting service config controller"
	I1115 09:57:09.371862       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:57:09.371871       1 config.go:309] "Starting node config controller"
	I1115 09:57:09.371890       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:57:09.371893       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:57:09.371896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:57:09.371900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:57:09.371881       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:57:09.371909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:57:09.472472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:57:09.472472       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:57:09.472513       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132] <==
	E1115 09:57:00.568075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:57:00.568145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:57:00.568175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:57:00.568183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:57:00.568224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:57:00.568267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:57:00.568273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:57:00.568293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:57:00.568327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:57:00.568324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:57:00.568383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:57:00.568447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:57:00.568530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:57:00.568535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:57:01.379116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:57:01.516790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:57:01.575709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:57:01.597863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:57:01.607910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:57:01.645366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:57:01.663774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:57:01.747109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:57:01.839775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:57:01.856018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1115 09:57:04.166475       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.331219    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-717282" podStartSLOduration=1.331197437 podStartE2EDuration="1.331197437s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.330884864 +0000 UTC m=+1.158286821" watchObservedRunningTime="2025-11-15 09:57:04.331197437 +0000 UTC m=+1.158599390"
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.331362    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-717282" podStartSLOduration=1.331353935 podStartE2EDuration="1.331353935s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.320923294 +0000 UTC m=+1.148325248" watchObservedRunningTime="2025-11-15 09:57:04.331353935 +0000 UTC m=+1.158755881"
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.359075    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-717282" podStartSLOduration=1.359052127 podStartE2EDuration="1.359052127s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.359035267 +0000 UTC m=+1.186437216" watchObservedRunningTime="2025-11-15 09:57:04.359052127 +0000 UTC m=+1.186454071"
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.359255    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-717282" podStartSLOduration=1.359244852 podStartE2EDuration="1.359244852s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.342793735 +0000 UTC m=+1.170195686" watchObservedRunningTime="2025-11-15 09:57:04.359244852 +0000 UTC m=+1.186646805"
	Nov 15 09:57:07 pause-717282 kubelet[1317]: I1115 09:57:07.590058    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 09:57:07 pause-717282 kubelet[1317]: I1115 09:57:07.591617    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682639    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d5715ed-b45b-4e26-b01e-11cc5c70b606-lib-modules\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682692    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8ws\" (UniqueName: \"kubernetes.io/projected/9d5715ed-b45b-4e26-b01e-11cc5c70b606-kube-api-access-6h8ws\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682737    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c143796c-42fe-4540-9d67-1c46241d2e12-kube-proxy\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682763    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c143796c-42fe-4540-9d67-1c46241d2e12-xtables-lock\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682787    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pr96\" (UniqueName: \"kubernetes.io/projected/c143796c-42fe-4540-9d67-1c46241d2e12-kube-api-access-7pr96\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682811    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9d5715ed-b45b-4e26-b01e-11cc5c70b606-cni-cfg\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682838    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d5715ed-b45b-4e26-b01e-11cc5c70b606-xtables-lock\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682890    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c143796c-42fe-4540-9d67-1c46241d2e12-lib-modules\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:09 pause-717282 kubelet[1317]: I1115 09:57:09.332517    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mgc2d" podStartSLOduration=1.332493792 podStartE2EDuration="1.332493792s" podCreationTimestamp="2025-11-15 09:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:09.332274706 +0000 UTC m=+6.159676661" watchObservedRunningTime="2025-11-15 09:57:09.332493792 +0000 UTC m=+6.159895745"
	Nov 15 09:57:09 pause-717282 kubelet[1317]: I1115 09:57:09.332678    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f24b6" podStartSLOduration=1.332664802 podStartE2EDuration="1.332664802s" podCreationTimestamp="2025-11-15 09:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:09.320014358 +0000 UTC m=+6.147416310" watchObservedRunningTime="2025-11-15 09:57:09.332664802 +0000 UTC m=+6.160066758"
	Nov 15 09:57:19 pause-717282 kubelet[1317]: I1115 09:57:19.771435    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 09:57:19 pause-717282 kubelet[1317]: I1115 09:57:19.864933    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd58b44-1d9a-428a-ab72-1c53e2329819-config-volume\") pod \"coredns-66bc5c9577-8rvls\" (UID: \"dbd58b44-1d9a-428a-ab72-1c53e2329819\") " pod="kube-system/coredns-66bc5c9577-8rvls"
	Nov 15 09:57:19 pause-717282 kubelet[1317]: I1115 09:57:19.864982    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zfb\" (UniqueName: \"kubernetes.io/projected/dbd58b44-1d9a-428a-ab72-1c53e2329819-kube-api-access-r5zfb\") pod \"coredns-66bc5c9577-8rvls\" (UID: \"dbd58b44-1d9a-428a-ab72-1c53e2329819\") " pod="kube-system/coredns-66bc5c9577-8rvls"
	Nov 15 09:57:20 pause-717282 kubelet[1317]: I1115 09:57:20.350249    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8rvls" podStartSLOduration=12.350225965 podStartE2EDuration="12.350225965s" podCreationTimestamp="2025-11-15 09:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:20.349252829 +0000 UTC m=+17.176654778" watchObservedRunningTime="2025-11-15 09:57:20.350225965 +0000 UTC m=+17.177627917"
	Nov 15 09:57:28 pause-717282 kubelet[1317]: I1115 09:57:28.521783    1317 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 15 09:57:28 pause-717282 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 09:57:28 pause-717282 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 09:57:28 pause-717282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 09:57:28 pause-717282 systemd[1]: kubelet.service: Consumed 1.177s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-717282 -n pause-717282
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-717282 -n pause-717282: exit status 2 (339.805374ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-717282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-717282
helpers_test.go:243: (dbg) docker inspect pause-717282:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260",
	        "Created": "2025-11-15T09:56:42.438255591Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 554936,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:56:42.476173272Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/hosts",
	        "LogPath": "/var/lib/docker/containers/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260/8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260-json.log",
	        "Name": "/pause-717282",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-717282:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-717282",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b787b335cc49c281d2c360fabd42ec7d107e10af58ed17fdab1964243a45260",
	                "LowerDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36/merged",
	                "UpperDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36/diff",
	                "WorkDir": "/var/lib/docker/overlay2/258465a317f4a14dd5095667082118524f498b048de1f3bed6a1943fd1582b36/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-717282",
	                "Source": "/var/lib/docker/volumes/pause-717282/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-717282",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-717282",
	                "name.minikube.sigs.k8s.io": "pause-717282",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dc789cce3a7fc45b8c365d38a9cb46767e647890bd41d455b11f3cf65719b21d",
	            "SandboxKey": "/var/run/docker/netns/dc789cce3a7f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-717282": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f85c46d43f71b2e95461473b3512768391d7a93502e36a761dcf0c0bb0049256",
	                    "EndpointID": "09ca4517700319869157584ba26b7c6dd54d3e8c51b40b809bc22f9535997386",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "f2:3f:c9:59:34:20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-717282",
	                        "8b787b335cc4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-717282 -n pause-717282
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-717282 -n pause-717282: exit status 2 (339.962247ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-717282 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-034018 sudo cat /etc/kubernetes/kubelet.conf                                                                │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /var/lib/kubelet/config.yaml                                                                │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status docker --all --full --no-pager                                                 │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat docker --no-pager                                                                 │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /etc/docker/daemon.json                                                                     │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo docker system info                                                                              │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status cri-docker --all --full --no-pager                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat cri-docker --no-pager                                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                        │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /usr/lib/systemd/system/cri-docker.service                                                  │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cri-dockerd --version                                                                           │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status containerd --all --full --no-pager                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat containerd --no-pager                                                             │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /lib/systemd/system/containerd.service                                                      │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo cat /etc/containerd/config.toml                                                                 │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo containerd config dump                                                                          │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl status crio --all --full --no-pager                                                   │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo systemctl cat crio --no-pager                                                                   │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                         │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-034018 sudo crio config                                                                                     │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ delete  │ -p cilium-034018                                                                                                      │ cilium-034018            │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p force-systemd-env-450177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-450177 │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ start   │ -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-941483      │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ start   │ -p pause-717282 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-717282             │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ pause   │ -p pause-717282 --alsologtostderr -v=5                                                                                │ pause-717282             │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:57:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:57:22.254221  567634 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:57:22.254549  567634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:22.254559  567634 out.go:374] Setting ErrFile to fd 2...
	I1115 09:57:22.254564  567634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:22.254806  567634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:57:22.255301  567634 out.go:368] Setting JSON to false
	I1115 09:57:22.256638  567634 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5983,"bootTime":1763194659,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:57:22.256752  567634 start.go:143] virtualization: kvm guest
	I1115 09:57:22.258914  567634 out.go:179] * [pause-717282] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:57:22.260367  567634 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:57:22.260411  567634 notify.go:221] Checking for updates...
	I1115 09:57:22.262954  567634 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:57:22.264305  567634 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:57:22.265585  567634 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:57:22.266928  567634 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:57:22.268297  567634 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:57:22.270210  567634 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:22.270977  567634 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:57:22.300696  567634 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:57:22.300822  567634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:57:22.375423  567634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 09:57:22.362651257 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:57:22.375624  567634 docker.go:319] overlay module found
	I1115 09:57:22.377676  567634 out.go:179] * Using the docker driver based on existing profile
	I1115 09:57:20.700539  566770 out.go:252] * Updating the running docker "NoKubernetes-941483" container ...
	I1115 09:57:20.700580  566770 machine.go:94] provisionDockerMachine start ...
	I1115 09:57:20.700678  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:20.721166  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:20.721546  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:20.721577  566770 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:57:20.855584  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-941483
	
	I1115 09:57:20.855638  566770 ubuntu.go:182] provisioning hostname "NoKubernetes-941483"
	I1115 09:57:20.855802  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:20.876570  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:20.876814  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:20.876831  566770 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-941483 && echo "NoKubernetes-941483" | sudo tee /etc/hostname
	I1115 09:57:21.018304  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-941483
	
	I1115 09:57:21.018383  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.038235  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:21.038544  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:21.038568  566770 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-941483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-941483/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-941483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:57:21.170753  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:57:21.170789  566770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:57:21.170814  566770 ubuntu.go:190] setting up certificates
	I1115 09:57:21.170827  566770 provision.go:84] configureAuth start
	I1115 09:57:21.170905  566770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-941483
	I1115 09:57:21.191482  566770 provision.go:143] copyHostCerts
	I1115 09:57:21.191523  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:57:21.191565  566770 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:57:21.191578  566770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:57:21.191652  566770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:57:21.191774  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:57:21.191803  566770 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:57:21.191813  566770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:57:21.191848  566770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:57:21.191920  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:57:21.191956  566770 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:57:21.191965  566770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:57:21.191994  566770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:57:21.192058  566770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-941483 san=[127.0.0.1 192.168.85.2 NoKubernetes-941483 localhost minikube]
	I1115 09:57:21.369965  566770 provision.go:177] copyRemoteCerts
	I1115 09:57:21.370028  566770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:57:21.370063  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.391538  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:21.489522  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1115 09:57:21.489594  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:57:21.510699  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1115 09:57:21.510792  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 09:57:21.529577  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1115 09:57:21.529646  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:57:21.550816  566770 provision.go:87] duration metric: took 379.969255ms to configureAuth
	I1115 09:57:21.550853  566770 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:57:21.551082  566770 config.go:182] Loaded profile config "NoKubernetes-941483": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1115 09:57:21.551225  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.572199  566770 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:21.572490  566770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33394 <nil> <nil>}
	I1115 09:57:21.572518  566770 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:57:21.850806  566770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:57:21.850842  566770 machine.go:97] duration metric: took 1.150251312s to provisionDockerMachine
	I1115 09:57:21.850861  566770 start.go:293] postStartSetup for "NoKubernetes-941483" (driver="docker")
	I1115 09:57:21.850874  566770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:57:21.850952  566770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:57:21.851009  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:21.873738  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:21.972135  566770 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:57:21.976069  566770 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:57:21.976105  566770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:57:21.976120  566770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:57:21.976177  566770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:57:21.976280  566770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:57:21.976295  566770 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /etc/ssl/certs/3590632.pem
	I1115 09:57:21.976436  566770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:57:21.984456  566770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:22.003602  566770 start.go:296] duration metric: took 152.722902ms for postStartSetup
	I1115 09:57:22.003690  566770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:57:22.003759  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:22.023417  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:22.120238  566770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:57:22.126372  566770 fix.go:56] duration metric: took 1.447156514s for fixHost
	I1115 09:57:22.126427  566770 start.go:83] releasing machines lock for "NoKubernetes-941483", held for 1.447248965s
	I1115 09:57:22.126503  566770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-941483
	I1115 09:57:22.149503  566770 ssh_runner.go:195] Run: cat /version.json
	I1115 09:57:22.149562  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:22.149740  566770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:57:22.149822  566770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-941483
	I1115 09:57:22.173128  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:22.173532  566770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33394 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/NoKubernetes-941483/id_rsa Username:docker}
	I1115 09:57:22.355058  566770 ssh_runner.go:195] Run: systemctl --version
	I1115 09:57:22.363576  566770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:22.380912  566770 out.go:179]   - Kubernetes: Stopping ...
	I1115 09:57:22.379098  567634 start.go:309] selected driver: docker
	I1115 09:57:22.379118  567634 start.go:930] validating driver "docker" against &{Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:22.379261  567634 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:57:22.379361  567634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:57:22.469140  567634 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 09:57:22.453500121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:57:22.470079  567634 cni.go:84] Creating CNI manager for ""
	I1115 09:57:22.470194  567634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:57:22.470249  567634 start.go:353] cluster config:
	{Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:fals
e storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:22.472277  567634 out.go:179] * Starting "pause-717282" primary control-plane node in "pause-717282" cluster
	I1115 09:57:22.473556  567634 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:57:22.474916  567634 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:57:22.476164  567634 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:57:22.476219  567634 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:57:22.476229  567634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:57:22.476235  567634 cache.go:65] Caching tarball of preloaded images
	I1115 09:57:22.476352  567634 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 09:57:22.476367  567634 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 09:57:22.476595  567634 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/config.json ...
	I1115 09:57:22.502310  567634 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:57:22.502335  567634 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:57:22.502359  567634 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:57:22.502408  567634 start.go:360] acquireMachinesLock for pause-717282: {Name:mk297e9b6cc9ee35d41615de1f5656e315b5bed1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:57:22.502476  567634 start.go:364] duration metric: took 43.66µs to acquireMachinesLock for "pause-717282"
	I1115 09:57:22.502504  567634 start.go:96] Skipping create...Using existing machine configuration
	I1115 09:57:22.502512  567634 fix.go:54] fixHost starting: 
	I1115 09:57:22.502776  567634 cli_runner.go:164] Run: docker container inspect pause-717282 --format={{.State.Status}}
	I1115 09:57:22.528441  567634 fix.go:112] recreateIfNeeded on pause-717282: state=Running err=<nil>
	W1115 09:57:22.528473  567634 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 09:57:21.509863  564357 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:21.509888  564357 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:57:21.509944  564357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:57:21.538639  564357 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:21.538665  564357 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:57:21.538676  564357 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1115 09:57:21.538795  564357 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-450177 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-450177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:57:21.538884  564357 ssh_runner.go:195] Run: crio config
	I1115 09:57:21.592630  564357 cni.go:84] Creating CNI manager for ""
	I1115 09:57:21.592651  564357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:57:21.592668  564357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:57:21.592694  564357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-450177 NodeName:force-systemd-env-450177 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:57:21.592817  564357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-450177"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:57:21.592876  564357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:57:21.601912  564357 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:57:21.601986  564357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:57:21.609989  564357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1115 09:57:21.623992  564357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:57:21.639765  564357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1115 09:57:21.653374  564357 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:57:21.657254  564357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:57:21.667318  564357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:21.754538  564357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:57:21.790279  564357 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177 for IP: 192.168.94.2
	I1115 09:57:21.790304  564357 certs.go:195] generating shared ca certs ...
	I1115 09:57:21.790326  564357 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:21.790523  564357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:57:21.790604  564357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:57:21.790614  564357 certs.go:257] generating profile certs ...
	I1115 09:57:21.790684  564357 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.key
	I1115 09:57:21.790698  564357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.crt with IP's: []
	I1115 09:57:22.059238  564357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.crt ...
	I1115 09:57:22.059269  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.crt: {Name:mkbde0b0f8c1a9fe7e6fce750f107ff9e6a01051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.059473  564357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.key ...
	I1115 09:57:22.059493  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/client.key: {Name:mk81163104b3521e59fb634bd8615e494df2379d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.059617  564357 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3
	I1115 09:57:22.059639  564357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1115 09:57:22.343451  564357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3 ...
	I1115 09:57:22.343488  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3: {Name:mk8b137a85cf4ec22194a161be811fc270ad9c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.343701  564357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3 ...
	I1115 09:57:22.343727  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3: {Name:mk4fa641ef06862c95d57c432b3ef781a51543e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.343908  564357 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt.c24d0dd3 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt
	I1115 09:57:22.344016  564357 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key.c24d0dd3 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key
	I1115 09:57:22.344105  564357 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key
	I1115 09:57:22.344130  564357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt with IP's: []
	I1115 09:57:22.639345  564357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt ...
	I1115 09:57:22.639376  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt: {Name:mk7d0760b07b8c46cfde3caf5b66728675fb61f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.639571  564357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key ...
	I1115 09:57:22.639589  564357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key: {Name:mkebbf3c070020a54e0d1d4866e8b663bc0f6f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:22.639677  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1115 09:57:22.639696  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1115 09:57:22.639709  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1115 09:57:22.639723  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1115 09:57:22.639735  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1115 09:57:22.639748  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1115 09:57:22.639760  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1115 09:57:22.639785  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1115 09:57:22.639835  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:57:22.639868  564357 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:57:22.639878  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:57:22.639902  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:57:22.639928  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:57:22.639948  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:57:22.639986  564357 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:22.640011  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.640028  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem -> /usr/share/ca-certificates/359063.pem
	I1115 09:57:22.640040  564357 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.640583  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:57:22.662556  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:57:22.681162  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:57:22.702476  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:57:22.721858  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1115 09:57:22.741686  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:57:22.761076  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:57:22.779451  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/force-systemd-env-450177/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:57:22.797433  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:57:22.817146  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:57:22.835482  564357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:57:22.854679  564357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:57:22.868117  564357 ssh_runner.go:195] Run: openssl version
	I1115 09:57:22.874918  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:57:22.883788  564357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.887658  564357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.887716  564357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:57:22.926799  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:57:22.935850  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:57:22.944683  564357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.948593  564357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.948652  564357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:22.984143  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:57:22.993319  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:57:23.002641  564357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:57:23.007766  564357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:57:23.007832  564357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:57:23.045565  564357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:57:23.055057  564357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:57:23.058811  564357 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:57:23.058872  564357 kubeadm.go:401] StartCluster: {Name:force-systemd-env-450177 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-450177 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:23.058938  564357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:57:23.058979  564357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:57:23.086789  564357 cri.go:89] found id: ""
	I1115 09:57:23.086864  564357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:57:23.095623  564357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:57:23.104062  564357 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:57:23.104134  564357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:57:23.112995  564357 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:57:23.113016  564357 kubeadm.go:158] found existing configuration files:
	
	I1115 09:57:23.113066  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:57:23.121320  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:57:23.121408  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:57:23.129286  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:57:23.137375  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:57:23.137467  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:57:23.145409  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:57:23.153435  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:57:23.153525  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:57:23.161645  564357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:57:23.169427  564357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:57:23.169487  564357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
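
The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and anything else is removed so the upcoming kubeadm init can regenerate it. A minimal Go sketch of that check-then-remove loop (file names and endpoint are taken from the log; this is an illustration, not minikube's actual kubeadm.go code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("removing stale config %s: %v\n", f, err)
			os.Remove(f) // ignore "not found", as the log does
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}

In this run every grep fails because the files do not exist yet (a first start), so the rm calls are effectively no-ops.
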
	I1115 09:57:23.177028  564357 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:57:23.219708  564357 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:57:23.219813  564357 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:57:23.262235  564357 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:57:23.262324  564357 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:57:23.262374  564357 kubeadm.go:319] OS: Linux
	I1115 09:57:23.262471  564357 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:57:23.262559  564357 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:57:23.262650  564357 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:57:23.262727  564357 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:57:23.262785  564357 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:57:23.262848  564357 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:57:23.262930  564357 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:57:23.263040  564357 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:57:23.329050  564357 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:57:23.329218  564357 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:57:23.329364  564357 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:57:23.337345  564357 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:57:22.038332  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:57:22.038798  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:57:22.038863  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:57:22.038924  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:57:22.069452  539051 cri.go:89] found id: "83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:22.069473  539051 cri.go:89] found id: ""
	I1115 09:57:22.069481  539051 logs.go:282] 1 containers: [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e]
	I1115 09:57:22.069540  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.073825  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:57:22.073907  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:57:22.106822  539051 cri.go:89] found id: ""
	I1115 09:57:22.106856  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.106867  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:57:22.106875  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:57:22.106938  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:57:22.141734  539051 cri.go:89] found id: ""
	I1115 09:57:22.141762  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.141779  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:57:22.141787  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:57:22.141848  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:57:22.177946  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:22.177972  539051 cri.go:89] found id: ""
	I1115 09:57:22.177983  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:57:22.178043  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.183230  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:57:22.183297  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:57:22.219178  539051 cri.go:89] found id: ""
	I1115 09:57:22.219202  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.219210  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:57:22.219216  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:57:22.219262  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:57:22.254429  539051 cri.go:89] found id: "ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:22.254450  539051 cri.go:89] found id: ""
	I1115 09:57:22.254460  539051 logs.go:282] 1 containers: [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad]
	I1115 09:57:22.254525  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.258879  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:57:22.258949  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:57:22.293820  539051 cri.go:89] found id: ""
	I1115 09:57:22.293847  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.293857  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:57:22.293865  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:57:22.293924  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:57:22.329659  539051 cri.go:89] found id: ""
	I1115 09:57:22.329722  539051 logs.go:282] 0 containers: []
	W1115 09:57:22.329732  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:57:22.329746  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:57:22.329786  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:57:22.382973  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:57:22.382998  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:57:22.439001  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:57:22.439054  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:57:22.541514  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:57:22.541562  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:57:22.560875  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:57:22.560904  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:57:22.623938  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:57:22.623962  539051 logs.go:123] Gathering logs for kube-apiserver [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e] ...
	I1115 09:57:22.623977  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:22.659874  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:57:22.659911  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:22.708485  539051 logs.go:123] Gathering logs for kube-controller-manager [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad] ...
	I1115 09:57:22.708518  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
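
Because the apiserver at 192.168.76.2:8443 refuses connections, this profile falls back to runtime-level diagnostics: journalctl for crio and kubelet, dmesg, a kubectl describe nodes attempt (which also fails), and per-component container logs located with crictl ps --name and read with crictl logs --tail 400. A rough Go sketch of that per-component collection loop (component names come from the log; error handling is trimmed, and this mirrors the logged commands rather than minikube's internals):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs finds containers for a component by name and tails their logs,
// mirroring the "crictl ps -a --quiet --name=..." / "crictl logs --tail 400" pairs above.
func tailComponentLogs(component string) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		fmt.Printf("%s: crictl ps failed: %v\n", component, err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("--- %s [%s] ---\n%s\n", component, id, logs)
	}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		tailComponentLogs(c)
	}
}
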
	I1115 09:57:22.530680  567634 out.go:252] * Updating the running docker "pause-717282" container ...
	I1115 09:57:22.530716  567634 machine.go:94] provisionDockerMachine start ...
	I1115 09:57:22.530810  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:22.552974  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:22.553302  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:22.553326  567634 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:57:22.690858  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-717282
	
	I1115 09:57:22.690894  567634 ubuntu.go:182] provisioning hostname "pause-717282"
	I1115 09:57:22.690957  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:22.710737  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:22.710990  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:22.711010  567634 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-717282 && echo "pause-717282" | sudo tee /etc/hostname
	I1115 09:57:22.851907  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-717282
	
	I1115 09:57:22.852013  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:22.871328  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:22.871698  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:22.871731  567634 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-717282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-717282/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-717282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:57:23.002361  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:57:23.002403  567634 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:57:23.002447  567634 ubuntu.go:190] setting up certificates
	I1115 09:57:23.002462  567634 provision.go:84] configureAuth start
	I1115 09:57:23.002568  567634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-717282
	I1115 09:57:23.021514  567634 provision.go:143] copyHostCerts
	I1115 09:57:23.021598  567634 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:57:23.021618  567634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:57:23.021702  567634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:57:23.021851  567634 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:57:23.021865  567634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:57:23.021913  567634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:57:23.022013  567634 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:57:23.022023  567634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:57:23.022062  567634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:57:23.022154  567634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.pause-717282 san=[127.0.0.1 192.168.103.2 localhost minikube pause-717282]
	I1115 09:57:23.186440  567634 provision.go:177] copyRemoteCerts
	I1115 09:57:23.186501  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:57:23.186542  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.207435  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.308520  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 09:57:23.327647  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:57:23.347874  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1115 09:57:23.365764  567634 provision.go:87] duration metric: took 363.283285ms to configureAuth
	I1115 09:57:23.365797  567634 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:57:23.366054  567634 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:23.366172  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.384847  567634 main.go:143] libmachine: Using SSH client type: native
	I1115 09:57:23.385122  567634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1115 09:57:23.385140  567634 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:57:23.669354  567634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:57:23.669382  567634 machine.go:97] duration metric: took 1.138656638s to provisionDockerMachine
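
Each cli_runner step above first asks Docker which host port is mapped to the container's 22/tcp (33389 for pause-717282) and then provisions over SSH: the hostname, a 127.0.1.1 entry in /etc/hosts, client/server certificates, and the CRIO_MINIKUBE_OPTIONS insecure-registry drop-in. A standalone sketch of just the port lookup, reusing the inspect template from the log (the profile name is the one from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template minikube passes to "docker container inspect -f" in the log.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-717282").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	// Print a hint for connecting the way sshutil does: key-based SSH as the "docker" user.
	fmt.Printf("ssh -i <profile id_rsa> -p %s docker@127.0.0.1\n", port)
}
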
	I1115 09:57:23.669442  567634 start.go:293] postStartSetup for "pause-717282" (driver="docker")
	I1115 09:57:23.669458  567634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:57:23.669554  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:57:23.669613  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.690034  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.785652  567634 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:57:23.789660  567634 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:57:23.789685  567634 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:57:23.789696  567634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:57:23.789743  567634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:57:23.789833  567634 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:57:23.789937  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:57:23.798305  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:23.816410  567634 start.go:296] duration metric: took 146.932102ms for postStartSetup
	I1115 09:57:23.816505  567634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:57:23.816578  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.835621  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.926962  567634 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:57:23.932001  567634 fix.go:56] duration metric: took 1.42948264s for fixHost
	I1115 09:57:23.932029  567634 start.go:83] releasing machines lock for "pause-717282", held for 1.429542366s
	I1115 09:57:23.932091  567634 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-717282
	I1115 09:57:23.951279  567634 ssh_runner.go:195] Run: cat /version.json
	I1115 09:57:23.951336  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.951373  567634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:57:23.951460  567634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-717282
	I1115 09:57:23.970061  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:23.971165  567634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/pause-717282/id_rsa Username:docker}
	I1115 09:57:24.114710  567634 ssh_runner.go:195] Run: systemctl --version
	I1115 09:57:24.121745  567634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:57:24.158632  567634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:57:24.163743  567634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:57:24.163932  567634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:57:24.172186  567634 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 09:57:24.172209  567634 start.go:496] detecting cgroup driver to use...
	I1115 09:57:24.172242  567634 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:57:24.172291  567634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:57:24.188843  567634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:57:24.202084  567634 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:57:24.202155  567634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:57:24.218733  567634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:57:24.232455  567634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:57:24.350529  567634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:57:24.456021  567634 docker.go:234] disabling docker service ...
	I1115 09:57:24.456087  567634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:57:24.471065  567634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:57:24.484082  567634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:57:24.595521  567634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:57:24.711360  567634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:57:24.724511  567634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:57:24.739327  567634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:57:24.739401  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.748648  567634 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:57:24.748733  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.758288  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.767559  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.776869  567634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:57:24.785176  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.795190  567634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.805340  567634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:57:24.815620  567634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:57:24.824296  567634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:57:24.832194  567634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:24.940737  567634 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:57:25.090523  567634 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:57:25.090599  567634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:57:25.094899  567634 start.go:564] Will wait 60s for crictl version
	I1115 09:57:25.094967  567634 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.098664  567634 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:57:25.125758  567634 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:57:25.125847  567634 ssh_runner.go:195] Run: crio --version
	I1115 09:57:25.157795  567634 ssh_runner.go:195] Run: crio --version
	I1115 09:57:25.189926  567634 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
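
The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place so CRI-O uses registry.k8s.io/pause:3.10.1, the systemd cgroup manager, a pod-scoped conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0, after which crio is restarted and crictl confirms RuntimeVersion 1.34.1. A small Go sketch that re-checks those values after the restart (the path and key/value strings come straight from the logged sed expressions; nothing else about the file layout is assumed):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Lines the logged sed commands are expected to leave behind in the drop-in.
	checks := map[string]string{
		`pause_image = "registry.k8s.io/pause:3.10.1"`: "pause image",
		`cgroup_manager = "systemd"`:                   "cgroup driver",
		`conmon_cgroup = "pod"`:                        "conmon cgroup",
		`"net.ipv4.ip_unprivileged_port_start=0"`:      "unprivileged port sysctl",
	}
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		panic(err)
	}
	conf := string(data)
	for line, what := range checks {
		if strings.Contains(conf, line) {
			fmt.Printf("ok: %s\n", what)
		} else {
			fmt.Printf("MISSING %s: %s\n", what, line)
		}
	}
}
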
	I1115 09:57:22.382275  566770 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1115 09:57:22.415324  566770 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1115 09:57:22.415419  566770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:57:22.470792  566770 cri.go:89] found id: "de1daf1472f342f2754bcfb06aacfd2b598ba2492fff4af9ec4e50ea6e0c8072"
	I1115 09:57:22.470814  566770 cri.go:89] found id: "d2f119ee882dd5006a213db8343d85276aabc9b6ff688305cee7bb3add5cff14"
	I1115 09:57:22.470819  566770 cri.go:89] found id: "4fcd7ebe384c657ab29f5563dbde05723f4320fd6df7a83a5a40ee092da7e3cc"
	I1115 09:57:22.470823  566770 cri.go:89] found id: "31bdc6a30424bd7e8b09f570c670886a183e120c8e427f29c72e5cfff3d0a462"
	I1115 09:57:22.470828  566770 cri.go:89] found id: ""
	W1115 09:57:22.470838  566770 kubeadm.go:839] found 4 kube-system containers to stop
	I1115 09:57:22.470849  566770 cri.go:252] Stopping containers: [de1daf1472f342f2754bcfb06aacfd2b598ba2492fff4af9ec4e50ea6e0c8072 d2f119ee882dd5006a213db8343d85276aabc9b6ff688305cee7bb3add5cff14 4fcd7ebe384c657ab29f5563dbde05723f4320fd6df7a83a5a40ee092da7e3cc 31bdc6a30424bd7e8b09f570c670886a183e120c8e427f29c72e5cfff3d0a462]
	I1115 09:57:22.470908  566770 ssh_runner.go:195] Run: which crictl
	I1115 09:57:22.475660  566770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 de1daf1472f342f2754bcfb06aacfd2b598ba2492fff4af9ec4e50ea6e0c8072 d2f119ee882dd5006a213db8343d85276aabc9b6ff688305cee7bb3add5cff14 4fcd7ebe384c657ab29f5563dbde05723f4320fd6df7a83a5a40ee092da7e3cc 31bdc6a30424bd7e8b09f570c670886a183e120c8e427f29c72e5cfff3d0a462
	I1115 09:57:25.191216  567634 cli_runner.go:164] Run: docker network inspect pause-717282 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:57:25.209899  567634 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 09:57:25.214426  567634 kubeadm.go:884] updating cluster {Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:57:25.214578  567634 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:57:25.214636  567634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:57:25.249181  567634 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:25.249211  567634 crio.go:433] Images already preloaded, skipping extraction
	I1115 09:57:25.249265  567634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:57:25.278888  567634 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 09:57:25.278911  567634 cache_images.go:86] Images are preloaded, skipping loading
	I1115 09:57:25.278921  567634 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 09:57:25.279045  567634 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-717282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:57:25.279122  567634 ssh_runner.go:195] Run: crio config
	I1115 09:57:25.338091  567634 cni.go:84] Creating CNI manager for ""
	I1115 09:57:25.338116  567634 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:57:25.338138  567634 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:57:25.338167  567634 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-717282 NodeName:pause-717282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:57:25.338321  567634 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-717282"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 09:57:25.338415  567634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:57:25.348471  567634 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 09:57:25.348553  567634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:57:25.357334  567634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1115 09:57:25.372110  567634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:57:25.385703  567634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
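
The kubeadm config printed above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new on the node. A small sketch that splits such a file into its documents and prints each apiVersion/kind, assuming gopkg.in/yaml.v3 (the library choice is mine, not something the log shows):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // yaml.Decoder reads multi-document streams separated by "---"
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
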
	I1115 09:57:25.400412  567634 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:57:25.405020  567634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:25.535965  567634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:57:25.552919  567634 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282 for IP: 192.168.103.2
	I1115 09:57:25.552942  567634 certs.go:195] generating shared ca certs ...
	I1115 09:57:25.552963  567634 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:25.553131  567634 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:57:25.553184  567634 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:57:25.553197  567634 certs.go:257] generating profile certs ...
	I1115 09:57:25.553384  567634 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.key
	I1115 09:57:25.553484  567634 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/apiserver.key.6e55ec4b
	I1115 09:57:25.553530  567634 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/proxy-client.key
	I1115 09:57:25.553669  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:57:25.553709  567634 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:57:25.553722  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:57:25.553760  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:57:25.553797  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:57:25.553826  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:57:25.553879  567634 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:57:25.554739  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:57:25.575806  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:57:25.594883  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:57:25.614735  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:57:25.632337  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 09:57:25.653462  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:57:25.673839  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:57:25.695327  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:57:25.716113  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:57:25.735691  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:57:25.753883  567634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:57:25.772284  567634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:57:25.785358  567634 ssh_runner.go:195] Run: openssl version
	I1115 09:57:25.792717  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:57:25.801827  567634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:57:25.805879  567634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:57:25.805943  567634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:57:25.843107  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:57:25.852805  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:57:25.861810  567634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:25.865592  567634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:25.865647  567634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:57:25.899910  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:57:25.908744  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:57:25.919160  567634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:57:25.923319  567634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:57:25.923378  567634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:57:25.958571  567634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:57:25.967570  567634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:57:25.971648  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 09:57:26.007577  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 09:57:26.043237  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 09:57:26.078340  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 09:57:26.114246  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 09:57:26.148697  567634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
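
The openssl x509 -noout -checkend 86400 calls above exit non-zero if a certificate expires within the next 24 hours; since all of them pass, the existing control-plane certificates are reused instead of being regenerated. The equivalent check in Go with crypto/x509, against one of the paths from the log, might look like this sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given duration - the Go analogue of "openssl x509 -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
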
	I1115 09:57:26.185467  567634 kubeadm.go:401] StartCluster: {Name:pause-717282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-717282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:57:26.185620  567634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:57:26.185707  567634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:57:26.215220  567634 cri.go:89] found id: "417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b"
	I1115 09:57:26.215244  567634 cri.go:89] found id: "4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895"
	I1115 09:57:26.215266  567634 cri.go:89] found id: "44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4"
	I1115 09:57:26.215271  567634 cri.go:89] found id: "d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813"
	I1115 09:57:26.215280  567634 cri.go:89] found id: "c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd"
	I1115 09:57:26.215284  567634 cri.go:89] found id: "97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132"
	I1115 09:57:26.215287  567634 cri.go:89] found id: "eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407"
	I1115 09:57:26.215289  567634 cri.go:89] found id: ""
	I1115 09:57:26.215329  567634 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 09:57:26.228020  567634 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:57:26Z" level=error msg="open /run/runc: no such file or directory"
	I1115 09:57:26.228085  567634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:57:26.236306  567634 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 09:57:26.236325  567634 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 09:57:26.236362  567634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 09:57:26.244190  567634 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:57:26.244939  567634 kubeconfig.go:125] found "pause-717282" server: "https://192.168.103.2:8443"
	I1115 09:57:26.245848  567634 kapi.go:59] client config for pause-717282: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:57:26.246255  567634 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1115 09:57:26.246269  567634 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1115 09:57:26.246274  567634 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1115 09:57:26.246279  567634 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1115 09:57:26.246283  567634 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1115 09:57:26.246672  567634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 09:57:26.254427  567634 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 09:57:26.254456  567634 kubeadm.go:602] duration metric: took 18.126221ms to restartPrimaryControlPlane
	I1115 09:57:26.254465  567634 kubeadm.go:403] duration metric: took 69.011622ms to StartCluster
	I1115 09:57:26.254479  567634 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:26.254543  567634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:57:26.256018  567634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:57:26.256315  567634 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:57:26.256379  567634 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:57:26.256629  567634 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:26.258520  567634 out.go:179] * Enabled addons: 
	I1115 09:57:26.258520  567634 out.go:179] * Verifying Kubernetes components...
	I1115 09:57:23.340162  564357 out.go:252]   - Generating certificates and keys ...
	I1115 09:57:23.340257  564357 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:57:23.340343  564357 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:57:23.520321  564357 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:57:23.790623  564357 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:57:24.170953  564357 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:57:24.255787  564357 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:57:24.414335  564357 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:57:24.414528  564357 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-450177 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 09:57:24.706223  564357 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:57:24.706368  564357 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-450177 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 09:57:24.765525  564357 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:57:25.057873  564357 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:57:25.160821  564357 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:57:25.160999  564357 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:57:25.734866  564357 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:57:26.259705  567634 addons.go:515] duration metric: took 3.339832ms for enable addons: enabled=[]
	I1115 09:57:26.259732  567634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:57:26.377251  567634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:57:26.391549  567634 node_ready.go:35] waiting up to 6m0s for node "pause-717282" to be "Ready" ...
	I1115 09:57:26.399606  567634 node_ready.go:49] node "pause-717282" is "Ready"
	I1115 09:57:26.399637  567634 node_ready.go:38] duration metric: took 8.051637ms for node "pause-717282" to be "Ready" ...
	I1115 09:57:26.399653  567634 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:57:26.399705  567634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:57:26.411712  567634 api_server.go:72] duration metric: took 155.3604ms to wait for apiserver process to appear ...
	I1115 09:57:26.411741  567634 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:57:26.411761  567634 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 09:57:26.416788  567634 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 09:57:26.417758  567634 api_server.go:141] control plane version: v1.34.1
	I1115 09:57:26.417785  567634 api_server.go:131] duration metric: took 6.036453ms to wait for apiserver health ...
	I1115 09:57:26.417796  567634 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:57:26.420830  567634 system_pods.go:59] 7 kube-system pods found
	I1115 09:57:26.420858  567634 system_pods.go:61] "coredns-66bc5c9577-8rvls" [dbd58b44-1d9a-428a-ab72-1c53e2329819] Running
	I1115 09:57:26.420866  567634 system_pods.go:61] "etcd-pause-717282" [f97712bd-4d6b-40f1-8135-aa322569a888] Running
	I1115 09:57:26.420872  567634 system_pods.go:61] "kindnet-mgc2d" [9d5715ed-b45b-4e26-b01e-11cc5c70b606] Running
	I1115 09:57:26.420878  567634 system_pods.go:61] "kube-apiserver-pause-717282" [952ff006-8bdb-41a9-bc42-322899d2bd04] Running
	I1115 09:57:26.420886  567634 system_pods.go:61] "kube-controller-manager-pause-717282" [df3b40b6-6294-4f5b-85e7-5c9192e05877] Running
	I1115 09:57:26.420892  567634 system_pods.go:61] "kube-proxy-f24b6" [c143796c-42fe-4540-9d67-1c46241d2e12] Running
	I1115 09:57:26.420901  567634 system_pods.go:61] "kube-scheduler-pause-717282" [a3430311-abf1-4802-8ce4-eea1311967b6] Running
	I1115 09:57:26.420907  567634 system_pods.go:74] duration metric: took 3.10461ms to wait for pod list to return data ...
	I1115 09:57:26.420919  567634 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:57:26.422836  567634 default_sa.go:45] found service account: "default"
	I1115 09:57:26.422859  567634 default_sa.go:55] duration metric: took 1.930301ms for default service account to be created ...
	I1115 09:57:26.422868  567634 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:57:26.425187  567634 system_pods.go:86] 7 kube-system pods found
	I1115 09:57:26.425210  567634 system_pods.go:89] "coredns-66bc5c9577-8rvls" [dbd58b44-1d9a-428a-ab72-1c53e2329819] Running
	I1115 09:57:26.425217  567634 system_pods.go:89] "etcd-pause-717282" [f97712bd-4d6b-40f1-8135-aa322569a888] Running
	I1115 09:57:26.425222  567634 system_pods.go:89] "kindnet-mgc2d" [9d5715ed-b45b-4e26-b01e-11cc5c70b606] Running
	I1115 09:57:26.425227  567634 system_pods.go:89] "kube-apiserver-pause-717282" [952ff006-8bdb-41a9-bc42-322899d2bd04] Running
	I1115 09:57:26.425236  567634 system_pods.go:89] "kube-controller-manager-pause-717282" [df3b40b6-6294-4f5b-85e7-5c9192e05877] Running
	I1115 09:57:26.425241  567634 system_pods.go:89] "kube-proxy-f24b6" [c143796c-42fe-4540-9d67-1c46241d2e12] Running
	I1115 09:57:26.425246  567634 system_pods.go:89] "kube-scheduler-pause-717282" [a3430311-abf1-4802-8ce4-eea1311967b6] Running
	I1115 09:57:26.425259  567634 system_pods.go:126] duration metric: took 2.380039ms to wait for k8s-apps to be running ...
	I1115 09:57:26.425270  567634 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:57:26.425319  567634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:57:26.439569  567634 system_svc.go:56] duration metric: took 14.28634ms WaitForService to wait for kubelet
	I1115 09:57:26.439603  567634 kubeadm.go:587] duration metric: took 183.257061ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:57:26.439627  567634 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:57:26.442559  567634 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:57:26.442589  567634 node_conditions.go:123] node cpu capacity is 8
	I1115 09:57:26.442602  567634 node_conditions.go:105] duration metric: took 2.968447ms to run NodePressure ...
	I1115 09:57:26.442619  567634 start.go:242] waiting for startup goroutines ...
	I1115 09:57:26.442629  567634 start.go:247] waiting for cluster config update ...
	I1115 09:57:26.442642  567634 start.go:256] writing updated cluster config ...
	I1115 09:57:26.443012  567634 ssh_runner.go:195] Run: rm -f paused
	I1115 09:57:26.447130  567634 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:57:26.447872  567634 kapi.go:59] client config for pause-717282: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.crt", KeyFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.key", CAFile:"/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1115 09:57:26.450908  567634 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rvls" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.454942  567634 pod_ready.go:94] pod "coredns-66bc5c9577-8rvls" is "Ready"
	I1115 09:57:26.454964  567634 pod_ready.go:86] duration metric: took 4.035957ms for pod "coredns-66bc5c9577-8rvls" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.456860  567634 pod_ready.go:83] waiting for pod "etcd-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.460471  567634 pod_ready.go:94] pod "etcd-pause-717282" is "Ready"
	I1115 09:57:26.460496  567634 pod_ready.go:86] duration metric: took 3.614532ms for pod "etcd-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.462379  567634 pod_ready.go:83] waiting for pod "kube-apiserver-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.465764  567634 pod_ready.go:94] pod "kube-apiserver-pause-717282" is "Ready"
	I1115 09:57:26.465784  567634 pod_ready.go:86] duration metric: took 3.378099ms for pod "kube-apiserver-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.467843  567634 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.851460  567634 pod_ready.go:94] pod "kube-controller-manager-pause-717282" is "Ready"
	I1115 09:57:26.851492  567634 pod_ready.go:86] duration metric: took 383.622366ms for pod "kube-controller-manager-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:27.051955  567634 pod_ready.go:83] waiting for pod "kube-proxy-f24b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:26.690648  564357 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:57:26.788219  564357 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:57:26.857641  564357 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:57:27.322961  564357 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:57:27.323385  564357 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:57:27.327337  564357 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:57:27.452348  567634 pod_ready.go:94] pod "kube-proxy-f24b6" is "Ready"
	I1115 09:57:27.452378  567634 pod_ready.go:86] duration metric: took 400.396269ms for pod "kube-proxy-f24b6" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:27.651629  567634 pod_ready.go:83] waiting for pod "kube-scheduler-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:28.051990  567634 pod_ready.go:94] pod "kube-scheduler-pause-717282" is "Ready"
	I1115 09:57:28.052016  567634 pod_ready.go:86] duration metric: took 400.362435ms for pod "kube-scheduler-pause-717282" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:57:28.052027  567634 pod_ready.go:40] duration metric: took 1.60484476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:57:28.097677  567634 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:57:28.099546  567634 out.go:179] * Done! kubectl is now configured to use "pause-717282" cluster and "default" namespace by default
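The trace above is minikube's post-restart readiness loop for the pause-717282 profile: it waits for the node, for the apiserver healthz endpoint, and then for each kube-system pod selected by the component labels listed at 09:57:26.447130. A minimal sketch of the same check from a workstation, assuming the kubeconfig context minikube writes for the profile is named pause-717282:

	# Wait for every kube-system pod to report Ready (4m matches the extra-wait budget above).
	kubectl --context pause-717282 -n kube-system wait pod --all --for=condition=Ready --timeout=4m
	# Spot-check one of the labels the test polls on (k8s-app=kube-dns selects coredns).
	kubectl --context pause-717282 -n kube-system get pods -l k8s-app=kube-dns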
	I1115 09:57:25.236907  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:57:25.237487  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:57:25.237567  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:57:25.237631  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:57:25.271220  539051 cri.go:89] found id: "83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:25.271248  539051 cri.go:89] found id: ""
	I1115 09:57:25.271259  539051 logs.go:282] 1 containers: [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e]
	I1115 09:57:25.271321  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.276555  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:57:25.276620  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:57:25.306797  539051 cri.go:89] found id: ""
	I1115 09:57:25.306825  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.306833  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:57:25.306839  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:57:25.306886  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:57:25.335340  539051 cri.go:89] found id: ""
	I1115 09:57:25.335367  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.335376  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:57:25.335382  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:57:25.335453  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:57:25.365633  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:25.365659  539051 cri.go:89] found id: ""
	I1115 09:57:25.365671  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:57:25.365737  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.370024  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:57:25.370099  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:57:25.400166  539051 cri.go:89] found id: ""
	I1115 09:57:25.400195  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.400205  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:57:25.400214  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:57:25.400271  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:57:25.432024  539051 cri.go:89] found id: "ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:25.432045  539051 cri.go:89] found id: ""
	I1115 09:57:25.432053  539051 logs.go:282] 1 containers: [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad]
	I1115 09:57:25.432168  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:25.436894  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:57:25.436971  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:57:25.470230  539051 cri.go:89] found id: ""
	I1115 09:57:25.470256  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.470265  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:57:25.470273  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:57:25.470333  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:57:25.498849  539051 cri.go:89] found id: ""
	I1115 09:57:25.498874  539051 logs.go:282] 0 containers: []
	W1115 09:57:25.498881  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:57:25.498891  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:57:25.498901  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:57:25.578543  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:57:25.578571  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:57:25.595823  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:57:25.595852  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:57:25.656795  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:57:25.656820  539051 logs.go:123] Gathering logs for kube-apiserver [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e] ...
	I1115 09:57:25.656836  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:25.692282  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:57:25.692317  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:25.743541  539051 logs.go:123] Gathering logs for kube-controller-manager [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad] ...
	I1115 09:57:25.743573  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:25.772726  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:57:25.772756  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:57:25.817257  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:57:25.817302  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
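The 539051 block above is minikube's log-gathering pass while its apiserver (192.168.76.2:8443) is refusing connections: it enumerates each control-plane container with crictl and tails whatever it finds. The same inspection can be reproduced by hand on that node; the commands below are the ones the gatherer itself runs, so only the shell variable is new:

	# Locate the kube-apiserver container, including exited ones.
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	# Tail its last 400 lines, as logs.go does above.
	sudo crictl logs --tail 400 "$ID"
	# Kubelet and CRI-O journals over the same window.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400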
	I1115 09:57:28.349774  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:57:28.350176  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:57:28.350228  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:57:28.350277  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:57:28.377280  539051 cri.go:89] found id: "83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:28.377302  539051 cri.go:89] found id: ""
	I1115 09:57:28.377319  539051 logs.go:282] 1 containers: [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e]
	I1115 09:57:28.377371  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:28.381274  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:57:28.381342  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:57:28.411805  539051 cri.go:89] found id: ""
	I1115 09:57:28.411836  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.411846  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:57:28.411854  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:57:28.411914  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:57:28.445530  539051 cri.go:89] found id: ""
	I1115 09:57:28.445560  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.445570  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:57:28.445578  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:57:28.445639  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:57:28.474647  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:28.474665  539051 cri.go:89] found id: ""
	I1115 09:57:28.474674  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:57:28.474727  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:28.479068  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:57:28.479133  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:57:28.504485  539051 cri.go:89] found id: ""
	I1115 09:57:28.504516  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.504527  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:57:28.504536  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:57:28.504608  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:57:28.531644  539051 cri.go:89] found id: "ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:28.531679  539051 cri.go:89] found id: ""
	I1115 09:57:28.531690  539051 logs.go:282] 1 containers: [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad]
	I1115 09:57:28.531748  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:57:28.535718  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:57:28.535796  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:57:28.564027  539051 cri.go:89] found id: ""
	I1115 09:57:28.564052  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.564062  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:57:28.564071  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:57:28.564134  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:57:28.592339  539051 cri.go:89] found id: ""
	I1115 09:57:28.592365  539051 logs.go:282] 0 containers: []
	W1115 09:57:28.592374  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:57:28.592386  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:57:28.592434  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:57:28.669368  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:57:28.669416  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:57:28.686518  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:57:28.686546  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:57:28.753380  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:57:28.753422  539051 logs.go:123] Gathering logs for kube-apiserver [83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e] ...
	I1115 09:57:28.753438  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 83b24165feb5102560c2393021261b103e49c0f4de945ff5e3d2816e38c2a47e"
	I1115 09:57:28.790597  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:57:28.790633  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:57:28.847787  539051 logs.go:123] Gathering logs for kube-controller-manager [ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad] ...
	I1115 09:57:28.847821  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ab680bcfc8bfcb32d30bf0a9a22fdf32cec83b8587958577d1d569aff65034ad"
	I1115 09:57:28.884258  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:57:28.884295  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:57:28.935033  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:57:28.935074  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:57:27.328711  564357 out.go:252]   - Booting up control plane ...
	I1115 09:57:27.328856  564357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:57:27.328963  564357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:57:27.329483  564357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:57:27.343527  564357 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:57:27.343641  564357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:57:27.349952  564357 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:57:27.350288  564357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:57:27.350360  564357 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:57:27.452273  564357 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:57:27.452452  564357 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:57:28.453703  564357 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001527657s
	I1115 09:57:28.457190  564357 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:57:28.457304  564357 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1115 09:57:28.457438  564357 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:57:28.457533  564357 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:57:30.026468  564357 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.569145441s
	I1115 09:57:30.539290  564357 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.082068295s
	I1115 09:57:32.459201  564357 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001790147s
	I1115 09:57:32.471941  564357 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:57:32.484599  564357 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:57:32.494259  564357 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:57:32.494576  564357 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-env-450177 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:57:32.505698  564357 kubeadm.go:319] [bootstrap-token] Using token: kw4gc9.5mgx548hvhh7rc4h
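The kubeadm run for force-systemd-env-450177 above finishes its control-plane-check phase by probing three fixed health endpoints. A hedged sketch of hitting the same endpoints from that node (the apiserver address 192.168.94.2:8443 is taken from the log; -k is used because the serving certificates are not in the host trust store):

	# kube-controller-manager and kube-scheduler health ports on localhost.
	curl -sk https://127.0.0.1:10257/healthz
	curl -sk https://127.0.0.1:10259/livez
	# kube-apiserver livez, the endpoint kubeadm waits on above.
	curl -sk https://192.168.94.2:8443/livez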
	
	
	==> CRI-O <==
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.034986787Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.035820823Z" level=info msg="Conmon does support the --sync option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.035847624Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.035866841Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.036664837Z" level=info msg="Conmon does support the --sync option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.036685062Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.040757891Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.040785651Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.04132919Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hook
s.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_m
appings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.041767426Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.041841412Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.047530461Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.085853629Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-8rvls Namespace:kube-system ID:2c036cd4313f3f5e49d949b26899c98ae2ba195e2c7fe9c74d7a87942be390f0 UID:dbd58b44-1d9a-428a-ab72-1c53e2329819 NetNS:/var/run/netns/d524d275-e9c4-4089-88bc-aa3912ffde82 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051e080}] Aliases:map[]}"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086020858Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-8rvls for CNI network kindnet (type=ptp)"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086406268Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086432965Z" level=info msg="Starting seccomp notifier watcher"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086493819Z" level=info msg="Create NRI interface"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086609842Z" level=info msg="built-in NRI default validator is disabled"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086625684Z" level=info msg="runtime interface created"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.08663903Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086646478Z" level=info msg="runtime interface starting up..."
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086653131Z" level=info msg="starting plugins..."
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086666264Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 15 09:57:25 pause-717282 crio[2177]: time="2025-11-15T09:57:25.086952724Z" level=info msg="No systemd watchdog enabled"
	Nov 15 09:57:25 pause-717282 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
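The CRI-O journal above ends with the runtime finishing its restart and systemd reporting crio.service started. Assuming shell access to the pause-717282 node, the same state can be confirmed with standard tooling (crictl info is an addition here; the journalctl form already appears earlier in this report):

	# Unit state and the most recent runtime log lines.
	sudo systemctl is-active crio
	sudo journalctl -u crio -n 50
	# Runtime and CNI status as reported over the CRI socket.
	sudo crictl info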
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	417a035d5497a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   2c036cd4313f3       coredns-66bc5c9577-8rvls               kube-system
	4be3d6cc9c884       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   f4d1cad4cd95d       kube-proxy-f24b6                       kube-system
	44120701377c8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   6179d40af3013       kindnet-mgc2d                          kube-system
	d5226f6ec3310       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   9d3b9a42b0202       kube-apiserver-pause-717282            kube-system
	c955341aff41e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   c0136c282e2df       etcd-pause-717282                      kube-system
	97f13c5dcb417       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   78378de54f868       kube-scheduler-pause-717282            kube-system
	eedd1774f1da1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   f0d58fe430009       kube-controller-manager-pause-717282   kube-system
	
	
	==> coredns [417a035d5497ae4550d197115037bf48f90f3f5569544d6634d6d1f36a76c43b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46635 - 32877 "HINFO IN 5147139433601398208.3392774817633581474. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022565606s
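The coredns log above shows only its startup banner and a single self-test HINFO query answered NXDOMAIN, which is expected noise. A hedged in-cluster resolution check against the same profile; the busybox image and the context name are assumptions, and any image that ships nslookup works:

	# One-off pod that resolves the apiserver service through CoreDNS, then deletes itself.
	kubectl --context pause-717282 run dns-check --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local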
	
	
	==> describe nodes <==
	Name:               pause-717282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-717282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=pause-717282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_57_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:57:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-717282
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:57:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:57:23 +0000   Sat, 15 Nov 2025 09:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-717282
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                8bbaf300-aac3-4695-b688-b4a05ec169cb
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8rvls                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-717282                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-mgc2d                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-717282             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-717282    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-f24b6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-717282             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-717282 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-717282 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-717282 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-717282 event: Registered Node pause-717282 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-717282 status is now: NodeReady
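The description above was collected with the kubectl binary minikube installs on the node. When only the Ready condition matters (the value the node_ready check at 09:57:26 reads), a narrower query does the same job; the context name is assumed to match the profile:

	# Print just the Ready condition instead of the full describe output.
	kubectl --context pause-717282 get node pause-717282 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'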
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [c955341aff41e582eb4cf3e7968bbb7511c6d5aa6ccde02971fa779ea9ba7dcd] <==
	{"level":"warn","ts":"2025-11-15T09:56:59.778285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.789871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.803382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.815594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.825387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.835594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.844603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.854903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.863618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.873589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.899540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.908276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.915803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.923951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.932523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.947068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.954456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.962030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.981344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:56:59.984982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.007921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.014878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.022941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:57:00.088649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56956","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:57:15.436720Z","caller":"traceutil/trace.go:172","msg":"trace[1087238230] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"132.652909ms","start":"2025-11-15T09:57:15.304047Z","end":"2025-11-15T09:57:15.436699Z","steps":["trace[1087238230] 'process raft request'  (duration: 132.516629ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:57:33 up  1:39,  0 user,  load average: 4.39, 2.41, 1.55
	Linux pause-717282 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44120701377c86a13941cce86ade9f62a6acf1b52ff16f8ddb305e7f21f14bf4] <==
	I1115 09:57:09.299601       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:57:09.299878       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 09:57:09.300050       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:57:09.300069       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:57:09.300091       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:57:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:57:09.556002       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:57:09.556052       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:57:09.556070       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:57:09.556250       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:57:09.756378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:57:09.756687       1 metrics.go:72] Registering metrics
	I1115 09:57:09.756777       1 controller.go:711] "Syncing nftables rules"
	I1115 09:57:19.500489       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 09:57:19.500567       1 main.go:301] handling current node
	I1115 09:57:29.502496       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 09:57:29.502532       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d5226f6ec3310e6db8de828f50b650f234c2d4352ca764002df22a8028216813] <==
	E1115 09:57:00.668594       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1115 09:57:00.716313       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:57:00.720611       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:00.720689       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 09:57:00.727460       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:00.727985       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:57:00.813346       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:57:01.519433       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 09:57:01.523013       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 09:57:01.523026       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:57:02.025593       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:57:02.065102       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:57:02.122224       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 09:57:02.128458       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1115 09:57:02.129612       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:57:02.133619       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:57:02.533727       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:57:03.406080       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:57:03.415646       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 09:57:03.422905       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:57:07.790022       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:07.794794       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:57:08.537554       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:57:08.636805       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 09:57:08.636805       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [eedd1774f1da1143522fb65556223529755ab408b4a76ab65da2a9a4dd980407] <==
	I1115 09:57:07.533185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 09:57:07.534363       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 09:57:07.534479       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 09:57:07.534497       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:57:07.534543       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:57:07.534631       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:57:07.534681       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 09:57:07.534690       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:57:07.535306       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:57:07.535660       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:57:07.535787       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:57:07.537487       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:57:07.538607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:57:07.538693       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 09:57:07.539082       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 09:57:07.539236       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 09:57:07.539292       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 09:57:07.539303       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 09:57:07.539312       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 09:57:07.544822       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:57:07.546432       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-717282" podCIDRs=["10.244.0.0/24"]
	I1115 09:57:07.550680       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 09:57:07.556992       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:57:07.566499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:57:22.487188       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4be3d6cc9c88485048b93b7f1eedbd5a0cd4cb1111e7f6c1f3469248da583895] <==
	I1115 09:57:09.155334       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:57:09.244242       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:57:09.344714       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:57:09.344795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 09:57:09.344913       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:57:09.363795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:57:09.363854       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:57:09.369221       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:57:09.369703       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:57:09.369741       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:57:09.371842       1 config.go:200] "Starting service config controller"
	I1115 09:57:09.371862       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:57:09.371871       1 config.go:309] "Starting node config controller"
	I1115 09:57:09.371890       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:57:09.371893       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:57:09.371896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:57:09.371900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:57:09.371881       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:57:09.371909       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:57:09.472472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:57:09.472472       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:57:09.472513       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [97f13c5dcb417ee07f2a82877efd1a85d01e15f288397c88502ef56901503132] <==
	E1115 09:57:00.568075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:57:00.568145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:57:00.568175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:57:00.568183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:57:00.568224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:57:00.568267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:57:00.568273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:57:00.568293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:57:00.568327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:57:00.568324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:57:00.568383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:57:00.568447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:57:00.568530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:57:00.568535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:57:01.379116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:57:01.516790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:57:01.575709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:57:01.597863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:57:01.607910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:57:01.645366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:57:01.663774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:57:01.747109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:57:01.839775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:57:01.856018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1115 09:57:04.166475       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.331219    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-717282" podStartSLOduration=1.331197437 podStartE2EDuration="1.331197437s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.330884864 +0000 UTC m=+1.158286821" watchObservedRunningTime="2025-11-15 09:57:04.331197437 +0000 UTC m=+1.158599390"
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.331362    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-717282" podStartSLOduration=1.331353935 podStartE2EDuration="1.331353935s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.320923294 +0000 UTC m=+1.148325248" watchObservedRunningTime="2025-11-15 09:57:04.331353935 +0000 UTC m=+1.158755881"
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.359075    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-717282" podStartSLOduration=1.359052127 podStartE2EDuration="1.359052127s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.359035267 +0000 UTC m=+1.186437216" watchObservedRunningTime="2025-11-15 09:57:04.359052127 +0000 UTC m=+1.186454071"
	Nov 15 09:57:04 pause-717282 kubelet[1317]: I1115 09:57:04.359255    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-717282" podStartSLOduration=1.359244852 podStartE2EDuration="1.359244852s" podCreationTimestamp="2025-11-15 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:04.342793735 +0000 UTC m=+1.170195686" watchObservedRunningTime="2025-11-15 09:57:04.359244852 +0000 UTC m=+1.186646805"
	Nov 15 09:57:07 pause-717282 kubelet[1317]: I1115 09:57:07.590058    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 09:57:07 pause-717282 kubelet[1317]: I1115 09:57:07.591617    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682639    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d5715ed-b45b-4e26-b01e-11cc5c70b606-lib-modules\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682692    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8ws\" (UniqueName: \"kubernetes.io/projected/9d5715ed-b45b-4e26-b01e-11cc5c70b606-kube-api-access-6h8ws\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682737    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c143796c-42fe-4540-9d67-1c46241d2e12-kube-proxy\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682763    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c143796c-42fe-4540-9d67-1c46241d2e12-xtables-lock\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682787    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pr96\" (UniqueName: \"kubernetes.io/projected/c143796c-42fe-4540-9d67-1c46241d2e12-kube-api-access-7pr96\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682811    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9d5715ed-b45b-4e26-b01e-11cc5c70b606-cni-cfg\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682838    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d5715ed-b45b-4e26-b01e-11cc5c70b606-xtables-lock\") pod \"kindnet-mgc2d\" (UID: \"9d5715ed-b45b-4e26-b01e-11cc5c70b606\") " pod="kube-system/kindnet-mgc2d"
	Nov 15 09:57:08 pause-717282 kubelet[1317]: I1115 09:57:08.682890    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c143796c-42fe-4540-9d67-1c46241d2e12-lib-modules\") pod \"kube-proxy-f24b6\" (UID: \"c143796c-42fe-4540-9d67-1c46241d2e12\") " pod="kube-system/kube-proxy-f24b6"
	Nov 15 09:57:09 pause-717282 kubelet[1317]: I1115 09:57:09.332517    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mgc2d" podStartSLOduration=1.332493792 podStartE2EDuration="1.332493792s" podCreationTimestamp="2025-11-15 09:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:09.332274706 +0000 UTC m=+6.159676661" watchObservedRunningTime="2025-11-15 09:57:09.332493792 +0000 UTC m=+6.159895745"
	Nov 15 09:57:09 pause-717282 kubelet[1317]: I1115 09:57:09.332678    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f24b6" podStartSLOduration=1.332664802 podStartE2EDuration="1.332664802s" podCreationTimestamp="2025-11-15 09:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:09.320014358 +0000 UTC m=+6.147416310" watchObservedRunningTime="2025-11-15 09:57:09.332664802 +0000 UTC m=+6.160066758"
	Nov 15 09:57:19 pause-717282 kubelet[1317]: I1115 09:57:19.771435    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 09:57:19 pause-717282 kubelet[1317]: I1115 09:57:19.864933    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd58b44-1d9a-428a-ab72-1c53e2329819-config-volume\") pod \"coredns-66bc5c9577-8rvls\" (UID: \"dbd58b44-1d9a-428a-ab72-1c53e2329819\") " pod="kube-system/coredns-66bc5c9577-8rvls"
	Nov 15 09:57:19 pause-717282 kubelet[1317]: I1115 09:57:19.864982    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zfb\" (UniqueName: \"kubernetes.io/projected/dbd58b44-1d9a-428a-ab72-1c53e2329819-kube-api-access-r5zfb\") pod \"coredns-66bc5c9577-8rvls\" (UID: \"dbd58b44-1d9a-428a-ab72-1c53e2329819\") " pod="kube-system/coredns-66bc5c9577-8rvls"
	Nov 15 09:57:20 pause-717282 kubelet[1317]: I1115 09:57:20.350249    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8rvls" podStartSLOduration=12.350225965 podStartE2EDuration="12.350225965s" podCreationTimestamp="2025-11-15 09:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:57:20.349252829 +0000 UTC m=+17.176654778" watchObservedRunningTime="2025-11-15 09:57:20.350225965 +0000 UTC m=+17.177627917"
	Nov 15 09:57:28 pause-717282 kubelet[1317]: I1115 09:57:28.521783    1317 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 15 09:57:28 pause-717282 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 09:57:28 pause-717282 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 09:57:28 pause-717282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 09:57:28 pause-717282 systemd[1]: kubelet.service: Consumed 1.177s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-717282 -n pause-717282
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-717282 -n pause-717282: exit status 2 (398.657127ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-717282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.25s)
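Note on the failure above: the captured kubelet journal ends with systemd stopping kubelet.service at 09:57:28, so the pause operation appears to have reached the point of stopping the kubelet, yet the post-mortem probe `status --format={{.APIServer}} -p pause-717282` still printed "Running" with exit status 2. The same probe can be re-run by hand; the sketch below is illustrative only, with the binary path, profile name, and the expected "Paused" state taken from this report rather than from any fixed API.

// probe_status.go: re-run the post-mortem probe used above, i.e.
// `out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-717282`.
// Illustrative sketch; binary path and profile name come from this report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command(
		"out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", "pause-717282",
	).CombinedOutput()
	state := strings.TrimSpace(string(out))
	// After a successful `minikube pause` the API server component is
	// expected to report "Paused"; the report above shows "Running" instead.
	fmt.Printf("APIServer=%q err=%v\n", state, err)
	if state != "Paused" {
		fmt.Println("cluster does not look paused")
	}
}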

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (252.298169ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:59:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
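The MK_ADDON_ENABLE_PAUSED error above comes from the pre-flight "check paused" step: before enabling an addon, minikube asks the container runtime for paused containers, and with the crio/runc runtime that amounts to running `sudo runc list -f json` inside the node, which fails here because /run/runc does not exist. The sketch below shows that kind of check in minimal form; it assumes `runc list -f json` emits a JSON array of containers carrying a "status" field, and the helper and struct names are illustrative, not minikube's actual code.

// paused_check.go: illustrative "list paused containers" probe in the spirit of
// the failing step above ("list paused: runc: sudo runc list -f json").
// Assumes the JSON array entries carry an "id" and a "status" field.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the branch hit in the report: /run/runc is missing, the
		// command exits non-zero, and the addon enable aborts with exit 11.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println(ids, err)
}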
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-335655 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-335655 describe deploy/metrics-server -n kube-system: exit status 1 (62.496062ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-335655 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
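The expectation string in the assertion above is simply the --registries override joined to the --images override from the enable command (fake.domain plus registry.k8s.io/echoserver:1.4); because the enable command itself exited 11, no metrics-server deployment was ever created, which is why the describe call finds nothing and the deployment info is empty. A minimal sketch of assembling such an expected image reference follows; the function name is made up for illustration and is not the test's actual helper.

// expected_image.go: illustrative assembly of the expected image reference
// checked by the assertion above.
package main

import "fmt"

// expectedImage joins a registry override with an image override the way the
// --registries/--images flags pair them up for an addon component.
func expectedImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	// Values taken from the enable command in the report:
	//   --images=MetricsServer=registry.k8s.io/echoserver:1.4
	//   --registries=MetricsServer=fake.domain
	fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}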
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-335655
helpers_test.go:243: (dbg) docker inspect old-k8s-version-335655:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482",
	        "Created": "2025-11-15T09:58:23.178019961Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:58:23.219924356Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/hosts",
	        "LogPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482-json.log",
	        "Name": "/old-k8s-version-335655",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-335655:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-335655",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482",
	                "LowerDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-335655",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-335655/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-335655",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-335655",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-335655",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5e00de43c0ee6710083a68739e2ffe42fc10c1e5cb6f4c1a3c4c467d663bdf05",
	            "SandboxKey": "/var/run/docker/netns/5e00de43c0ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-335655": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5f22abf6c460e469b71da8d9c04b0cc70f79b863fc7fb95c973cc15281dd62ec",
	                    "EndpointID": "a6b62bfc9e5c6da2e6efa7a5ea213f481e15c51a84c614a98ef8181611e8761e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "42:b8:5f:aa:10:12",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-335655",
	                        "e7381b09c1c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335655 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-335655 logs -n 25: (1.136377234s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p cilium-034018                                                                                                                                                                                                                              │ cilium-034018             │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p force-systemd-env-450177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-450177  │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p pause-717282 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-717282              │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ pause   │ -p pause-717282 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-717282              │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ delete  │ -p pause-717282                                                                                                                                                                                                                               │ pause-717282              │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ delete  │ -p force-systemd-env-450177                                                                                                                                                                                                                   │ force-systemd-env-450177  │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p force-systemd-flag-896620 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p cert-expiration-341243 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-341243    │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ ssh     │ -p NoKubernetes-941483 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ force-systemd-flag-896620 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p force-systemd-flag-896620                                                                                                                                                                                                                  │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p cert-options-759344 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ stop    │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p NoKubernetes-941483 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p NoKubernetes-941483 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │                     │
	│ delete  │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ ssh     │ cert-options-759344 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p cert-options-759344 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p cert-options-759344                                                                                                                                                                                                                        │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:58:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:58:29.874516  589862 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:58:29.874820  589862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:29.874832  589862 out.go:374] Setting ErrFile to fd 2...
	I1115 09:58:29.874838  589862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:29.875092  589862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:58:29.875635  589862 out.go:368] Setting JSON to false
	I1115 09:58:29.876824  589862 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6051,"bootTime":1763194659,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:58:29.876941  589862 start.go:143] virtualization: kvm guest
	I1115 09:58:29.879225  589862 out.go:179] * [no-preload-559401] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:58:29.880796  589862 notify.go:221] Checking for updates...
	I1115 09:58:29.880848  589862 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:58:29.882225  589862 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:58:29.883821  589862 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:58:29.885862  589862 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:58:29.887184  589862 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:58:29.889102  589862 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:58:29.890994  589862 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:58:29.891132  589862 config.go:182] Loaded profile config "kubernetes-upgrade-405833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:58:29.891265  589862 config.go:182] Loaded profile config "old-k8s-version-335655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 09:58:29.891417  589862 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:58:29.917974  589862 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:58:29.918150  589862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:58:29.984949  589862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 09:58:29.974075987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:58:29.985133  589862 docker.go:319] overlay module found
	I1115 09:58:29.987254  589862 out.go:179] * Using the docker driver based on user configuration
	I1115 09:58:29.988613  589862 start.go:309] selected driver: docker
	I1115 09:58:29.988636  589862 start.go:930] validating driver "docker" against <nil>
	I1115 09:58:29.988651  589862 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:58:29.989314  589862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:58:30.056142  589862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 09:58:30.044702878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:58:30.056331  589862 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:58:30.056639  589862 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:58:30.058568  589862 out.go:179] * Using Docker driver with root privileges
	I1115 09:58:30.059840  589862 cni.go:84] Creating CNI manager for ""
	I1115 09:58:30.059920  589862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:58:30.059939  589862 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:58:30.060019  589862 start.go:353] cluster config:
	{Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:58:30.061582  589862 out.go:179] * Starting "no-preload-559401" primary control-plane node in "no-preload-559401" cluster
	I1115 09:58:30.062897  589862 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:58:30.064280  589862 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:58:30.065517  589862 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:58:30.065605  589862 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:58:30.065633  589862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/config.json ...
	I1115 09:58:30.065669  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/config.json: {Name:mkfae10aca1bc64f8ae312397b6f0f9d7f37cf88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:30.065861  589862 cache.go:107] acquiring lock: {Name:mk5f28db5350cb83d4ee10bd319ac89dc2575176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065863  589862 cache.go:107] acquiring lock: {Name:mk20541f119eb4401d674cb4e354d83b40cb36ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065911  589862 cache.go:107] acquiring lock: {Name:mk8a811e12b56d44de920eef87a9a4aec36ca449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065914  589862 cache.go:107] acquiring lock: {Name:mk0dbec31b80757040ed2efbb15c656d1127a225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065958  589862 cache.go:115] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 09:58:30.065968  589862 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.702µs
	I1115 09:58:30.065978  589862 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 09:58:30.065872  589862 cache.go:107] acquiring lock: {Name:mk54eb1701531b2aef5f1854448ea61e0b50dc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065960  589862 cache.go:107] acquiring lock: {Name:mk3ddfe2b5843c63ea691168ffaaf34627ed6f51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065988  589862 cache.go:107] acquiring lock: {Name:mka82434b9fd38bdfc8ba016f803ffb7c71c9f8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065996  589862 cache.go:107] acquiring lock: {Name:mkffdd5e68593188f2779fed2aafa94b93d50fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.066040  589862 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:30.066054  589862 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 09:58:30.066094  589862 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:30.066184  589862 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:30.066192  589862 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:30.066261  589862 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:30.066326  589862 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:30.067602  589862 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:30.067670  589862 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 09:58:30.067686  589862 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:30.067670  589862 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:30.067695  589862 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:30.067610  589862 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:30.067739  589862 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:30.091045  589862 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:58:30.091070  589862 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:58:30.091086  589862 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:58:30.091112  589862 start.go:360] acquireMachinesLock for no-preload-559401: {Name:mk95ac24bdde539f9c4d5f16eaa9bc055d55114d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.091226  589862 start.go:364] duration metric: took 88.763µs to acquireMachinesLock for "no-preload-559401"
	I1115 09:58:30.091258  589862 start.go:93] Provisioning new machine with config: &{Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:58:30.091348  589862 start.go:125] createHost starting for "" (driver="docker")
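The cache.go entries above map each image reference to a tar file under the profile's cache directory (for example "gcr.io/k8s-minikube/storage-provisioner:v5" to ".../cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5"). A small sketch of that ref-to-path convention as it appears in these logs (an illustrative helper, not minikube's implementation):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath reproduces the on-disk layout visible in the cache.go lines above:
	// the tag separator ':' becomes '_' under <minikubeHome>/cache/images/<arch>/.
	func cachePath(minikubeHome, arch, imageRef string) string {
		return filepath.Join(minikubeHome, "cache", "images", arch,
			strings.ReplaceAll(imageRef, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/home/jenkins/minikube-integration/21895-355485/.minikube",
			"amd64", "registry.k8s.io/pause:3.10.1"))
		// prints .../.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	}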
	I1115 09:58:28.127073  585980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt ...
	I1115 09:58:28.127104  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: {Name:mk1dc0830bf8ce637f791a39fc95fd42778d3198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.127283  585980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.key ...
	I1115 09:58:28.127295  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.key: {Name:mkf292a6df394d42f7d220fab6b3746567ae37f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.127381  585980 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb
	I1115 09:58:28.127417  585980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 09:58:28.185179  585980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb ...
	I1115 09:58:28.185210  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb: {Name:mk406e350629ae2fcd80883d9376b7d11bea8e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.185380  585980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb ...
	I1115 09:58:28.185420  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb: {Name:mkbf51d04e1156dc6394f165e55405a0439bcd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.185530  585980 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt
	I1115 09:58:28.185641  585980 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key
	I1115 09:58:28.185736  585980 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key
	I1115 09:58:28.185761  585980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt with IP's: []
	I1115 09:58:28.377041  585980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt ...
	I1115 09:58:28.377078  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt: {Name:mk232926b85d201a97a0d79ea38308091e816d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.377277  585980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key ...
	I1115 09:58:28.377297  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key: {Name:mkc65c93a414b464314b39175815b9bf5583609b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.377526  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:58:28.377587  585980 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:58:28.377601  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:58:28.377642  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:58:28.377680  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:58:28.377720  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:58:28.377781  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:58:28.378360  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:58:28.397666  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:58:28.415298  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:58:28.433731  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:58:28.451682  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 09:58:28.469885  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 09:58:28.487862  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:58:28.505444  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:58:28.524769  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:58:28.544041  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:58:28.562599  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:58:28.579861  585980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:58:28.593228  585980 ssh_runner.go:195] Run: openssl version
	I1115 09:58:28.599738  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:58:28.608404  585980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:58:28.612119  585980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:58:28.612172  585980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:58:28.647736  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:58:28.657115  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:58:28.666082  585980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:28.670350  585980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:28.670449  585980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:28.706327  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:58:28.715938  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:58:28.724867  585980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:58:28.728703  585980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:58:28.728761  585980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:58:28.763128  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:58:28.772171  585980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:58:28.775805  585980 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:58:28.775858  585980 kubeadm.go:401] StartCluster: {Name:old-k8s-version-335655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-335655 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:58:28.775949  585980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:58:28.776014  585980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:58:28.809321  585980 cri.go:89] found id: ""
	I1115 09:58:28.809409  585980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:58:28.820377  585980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:58:28.831584  585980 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:58:28.831648  585980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:58:28.842237  585980 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:58:28.842257  585980 kubeadm.go:158] found existing configuration files:
	
	I1115 09:58:28.842304  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:58:28.852331  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:58:28.852413  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:58:28.861894  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:58:28.871092  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:58:28.871159  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:58:28.879747  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:58:28.889243  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:58:28.889307  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:58:28.897561  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:58:28.907063  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:58:28.907127  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:58:28.918024  585980 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:58:29.012542  585980 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:58:29.090187  585980 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
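The cleanup pass at 09:58:28.842-28.918 above greps each kubeadm-generated config for the expected control-plane endpoint and removes the file when the check fails, so that "kubeadm init" can regenerate it. A standalone sketch of that check-then-remove loop (the paths and endpoint are the ones in the log; the use of os/exec here is illustrative, not minikube's ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// If the expected endpoint is absent (or the file is missing),
			// drop the file so kubeadm can write a fresh one.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Println("removing stale config:", f)
				exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}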
	I1115 09:58:31.984459  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:31.984977  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:58:31.985038  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:31.985116  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:32.014564  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:32.014603  539051 cri.go:89] found id: ""
	I1115 09:58:32.014616  539051 logs.go:282] 1 containers: [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:32.014682  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:32.018959  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:32.019043  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:32.048192  539051 cri.go:89] found id: ""
	I1115 09:58:32.048221  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.048233  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:32.048242  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:32.048297  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:32.079490  539051 cri.go:89] found id: ""
	I1115 09:58:32.079514  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.079522  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:32.079530  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:32.079585  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:32.111329  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:32.111355  539051 cri.go:89] found id: ""
	I1115 09:58:32.111366  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:32.111453  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:32.118840  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:32.118918  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:32.155586  539051 cri.go:89] found id: ""
	I1115 09:58:32.155616  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.155626  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:32.155634  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:32.155697  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:32.186730  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:32.186760  539051 cri.go:89] found id: ""
	I1115 09:58:32.186770  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:32.186837  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:32.191286  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:32.191350  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:32.218750  539051 cri.go:89] found id: ""
	I1115 09:58:32.218780  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.218791  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:32.218800  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:32.218871  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:32.247632  539051 cri.go:89] found id: ""
	I1115 09:58:32.247659  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.247668  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:32.247681  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:32.247695  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:32.292636  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:32.292679  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:58:32.325839  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:32.325868  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:32.408867  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:32.408905  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:32.426739  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:32.426774  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:58:32.496095  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:58:32.496113  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:32.496126  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:32.527334  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:32.527365  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:32.579145  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:32.579201  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
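While this retry loop runs, api_server.go polls the control plane's /healthz endpoint (09:58:31.984 above) and, on "connection refused", falls back to collecting container and journal logs. A minimal sketch of such a health probe (the URL is the one in the log; the timeout and the insecure TLS setting are assumptions for illustration, not minikube's code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver certificate is not trusted by the probing host,
			// so verification is skipped for this liveness check only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz status:", resp.Status)
	}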
	I1115 09:58:30.093531  589862 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 09:58:30.093811  589862 start.go:159] libmachine.API.Create for "no-preload-559401" (driver="docker")
	I1115 09:58:30.093852  589862 client.go:173] LocalClient.Create starting
	I1115 09:58:30.093933  589862 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 09:58:30.093982  589862 main.go:143] libmachine: Decoding PEM data...
	I1115 09:58:30.094003  589862 main.go:143] libmachine: Parsing certificate...
	I1115 09:58:30.094071  589862 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 09:58:30.094097  589862 main.go:143] libmachine: Decoding PEM data...
	I1115 09:58:30.094114  589862 main.go:143] libmachine: Parsing certificate...
	I1115 09:58:30.094582  589862 cli_runner.go:164] Run: docker network inspect no-preload-559401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:58:30.113798  589862 cli_runner.go:211] docker network inspect no-preload-559401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:58:30.113867  589862 network_create.go:284] running [docker network inspect no-preload-559401] to gather additional debugging logs...
	I1115 09:58:30.113885  589862 cli_runner.go:164] Run: docker network inspect no-preload-559401
	W1115 09:58:30.133288  589862 cli_runner.go:211] docker network inspect no-preload-559401 returned with exit code 1
	I1115 09:58:30.133326  589862 network_create.go:287] error running [docker network inspect no-preload-559401]: docker network inspect no-preload-559401: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-559401 not found
	I1115 09:58:30.133356  589862 network_create.go:289] output of [docker network inspect no-preload-559401]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-559401 not found
	
	** /stderr **
	I1115 09:58:30.133512  589862 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:58:30.154570  589862 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 09:58:30.155902  589862 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 09:58:30.156422  589862 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 09:58:30.156864  589862 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4664d9872852 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:5a:7a:5f:0d:bf} reservation:<nil>}
	I1115 09:58:30.157366  589862 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5f22abf6c460 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:3a:fa:c2:83:36:45} reservation:<nil>}
	I1115 09:58:30.157898  589862 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b93b691a24ad IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:12:3c:53:f1:ac:76} reservation:<nil>}
	I1115 09:58:30.158603  589862 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d8b620}
	I1115 09:58:30.158625  589862 network_create.go:124] attempt to create docker network no-preload-559401 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 09:58:30.158685  589862 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-559401 no-preload-559401
	I1115 09:58:30.212478  589862 network_create.go:108] docker network no-preload-559401 192.168.103.0/24 created
	I1115 09:58:30.212513  589862 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-559401" container
	I1115 09:58:30.212589  589862 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:58:30.231647  589862 cli_runner.go:164] Run: docker volume create no-preload-559401 --label name.minikube.sigs.k8s.io=no-preload-559401 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:58:30.233919  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1115 09:58:30.242312  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 09:58:30.243685  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 09:58:30.253127  589862 oci.go:103] Successfully created a docker volume no-preload-559401
	I1115 09:58:30.253217  589862 cli_runner.go:164] Run: docker run --rm --name no-preload-559401-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-559401 --entrypoint /usr/bin/test -v no-preload-559401:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:58:30.258087  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 09:58:30.268771  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 09:58:30.278033  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 09:58:30.283432  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1115 09:58:30.356888  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1115 09:58:30.356918  589862 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 291.008829ms
	I1115 09:58:30.356934  589862 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 09:58:30.705162  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 09:58:30.705196  589862 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 639.345096ms
	I1115 09:58:30.705212  589862 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 09:58:30.723157  589862 oci.go:107] Successfully prepared a docker volume no-preload-559401
	I1115 09:58:30.723196  589862 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1115 09:58:30.723272  589862 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 09:58:30.723299  589862 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 09:58:30.723343  589862 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:58:30.779896  589862 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-559401 --name no-preload-559401 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-559401 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-559401 --network no-preload-559401 --ip 192.168.103.2 --volume no-preload-559401:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:58:31.140454  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Running}}
	I1115 09:58:31.163830  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:58:31.184769  589862 cli_runner.go:164] Run: docker exec no-preload-559401 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:58:31.238067  589862 oci.go:144] the created container "no-preload-559401" has a running status.
	I1115 09:58:31.238095  589862 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa...
	I1115 09:58:31.286215  589862 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:58:31.329772  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:58:31.353370  589862 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:58:31.353415  589862 kic_runner.go:114] Args: [docker exec --privileged no-preload-559401 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:58:31.404309  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:58:31.429636  589862 machine.go:94] provisionDockerMachine start ...
	I1115 09:58:31.429740  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:31.454560  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:31.454909  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:31.454934  589862 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:58:31.455868  589862 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37522->127.0.0.1:33434: read: connection reset by peer
	I1115 09:58:31.708803  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 09:58:31.708901  589862 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.642916341s
	I1115 09:58:31.708923  589862 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 09:58:31.773506  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 09:58:31.773547  589862 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.707693668s
	I1115 09:58:31.773571  589862 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 09:58:31.775573  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 09:58:31.775605  589862 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.70964456s
	I1115 09:58:31.775622  589862 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 09:58:31.815339  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 09:58:31.815369  589862 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.749457567s
	I1115 09:58:31.815384  589862 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 09:58:32.163494  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 09:58:32.163531  589862 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.097603485s
	I1115 09:58:32.163547  589862 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 09:58:32.163567  589862 cache.go:87] Successfully saved all images to host disk.
	I1115 09:58:34.594835  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-559401
	
	I1115 09:58:34.594884  589862 ubuntu.go:182] provisioning hostname "no-preload-559401"
	I1115 09:58:34.594968  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:34.613706  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:34.613933  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:34.613948  589862 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-559401 && echo "no-preload-559401" | sudo tee /etc/hostname
	I1115 09:58:34.755990  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-559401
	
	I1115 09:58:34.756080  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:34.775532  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:34.775774  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:34.775792  589862 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-559401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-559401/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-559401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:58:34.906384  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:58:34.906444  589862 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:58:34.906476  589862 ubuntu.go:190] setting up certificates
	I1115 09:58:34.906500  589862 provision.go:84] configureAuth start
	I1115 09:58:34.906581  589862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-559401
	I1115 09:58:34.926843  589862 provision.go:143] copyHostCerts
	I1115 09:58:34.926916  589862 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:58:34.926932  589862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:58:34.927018  589862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:58:34.927134  589862 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:58:34.927146  589862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:58:34.927189  589862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:58:34.927267  589862 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:58:34.927277  589862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:58:34.927315  589862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:58:34.927404  589862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.no-preload-559401 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-559401]
	I1115 09:58:35.219274  589862 provision.go:177] copyRemoteCerts
	I1115 09:58:35.219345  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:58:35.219409  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.241686  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:35.345507  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:58:35.370231  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 09:58:35.394192  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:58:35.418598  589862 provision.go:87] duration metric: took 512.076108ms to configureAuth
	I1115 09:58:35.418731  589862 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:58:35.418944  589862 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:58:35.419062  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.443736  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:35.444029  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:35.444062  589862 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:58:35.720578  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:58:35.720603  589862 machine.go:97] duration metric: took 4.290945095s to provisionDockerMachine
	I1115 09:58:35.720615  589862 client.go:176] duration metric: took 5.626752422s to LocalClient.Create
	I1115 09:58:35.720639  589862 start.go:167] duration metric: took 5.626830168s to libmachine.API.Create "no-preload-559401"
	I1115 09:58:35.720653  589862 start.go:293] postStartSetup for "no-preload-559401" (driver="docker")
	I1115 09:58:35.720665  589862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:58:35.720742  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:58:35.720798  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.743973  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:35.847187  589862 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:58:35.851307  589862 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:58:35.851341  589862 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:58:35.851355  589862 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:58:35.851432  589862 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:58:35.851531  589862 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:58:35.851662  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:58:35.861180  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:58:35.882717  589862 start.go:296] duration metric: took 162.047003ms for postStartSetup
	I1115 09:58:35.883027  589862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-559401
	I1115 09:58:35.902579  589862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/config.json ...
	I1115 09:58:35.902870  589862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:58:35.902915  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.921170  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:36.013132  589862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:58:36.018024  589862 start.go:128] duration metric: took 5.926656055s to createHost
	I1115 09:58:36.018051  589862 start.go:83] releasing machines lock for "no-preload-559401", held for 5.926812114s
	I1115 09:58:36.018127  589862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-559401
	I1115 09:58:36.036935  589862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:58:36.036996  589862 ssh_runner.go:195] Run: cat /version.json
	I1115 09:58:36.037045  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:36.037050  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:36.057132  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:36.057173  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:36.206632  589862 ssh_runner.go:195] Run: systemctl --version
	I1115 09:58:36.213577  589862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:58:36.248844  589862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:58:36.253674  589862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:58:36.253747  589862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:58:36.282619  589862 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:58:36.282648  589862 start.go:496] detecting cgroup driver to use...
	I1115 09:58:36.282686  589862 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:58:36.282756  589862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:58:36.302762  589862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:58:36.318948  589862 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:58:36.319021  589862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:58:36.339460  589862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:58:36.359517  589862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:58:36.445215  589862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:58:36.535533  589862 docker.go:234] disabling docker service ...
	I1115 09:58:36.535609  589862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:58:36.556104  589862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:58:36.569646  589862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:58:36.656598  589862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:58:36.738249  589862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:58:36.751511  589862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:58:36.766513  589862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:58:36.766587  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.776944  589862 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:58:36.777003  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.786248  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.795829  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.805386  589862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:58:36.813681  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.822725  589862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.837041  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.846317  589862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:58:36.854047  589862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:58:36.862245  589862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:58:36.949639  589862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:58:37.066568  589862 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:58:37.066643  589862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:58:37.071088  589862 start.go:564] Will wait 60s for crictl version
	I1115 09:58:37.071149  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.074966  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:58:37.102164  589862 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:58:37.102254  589862 ssh_runner.go:195] Run: crio --version
	I1115 09:58:37.135101  589862 ssh_runner.go:195] Run: crio --version
	I1115 09:58:37.168824  589862 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:58:37.522328  585980 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1115 09:58:37.522427  585980 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:58:37.522571  585980 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:58:37.522651  585980 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:58:37.522703  585980 kubeadm.go:319] OS: Linux
	I1115 09:58:37.522774  585980 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:58:37.522840  585980 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:58:37.522913  585980 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:58:37.522985  585980 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:58:37.523056  585980 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:58:37.523125  585980 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:58:37.523191  585980 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:58:37.523249  585980 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:58:37.523357  585980 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:58:37.523508  585980 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:58:37.523625  585980 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1115 09:58:37.523716  585980 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:58:37.529522  585980 out.go:252]   - Generating certificates and keys ...
	I1115 09:58:37.529673  585980 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:58:37.529789  585980 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:58:37.529897  585980 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:58:37.529982  585980 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:58:37.530065  585980 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:58:37.530144  585980 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:58:37.530223  585980 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:58:37.530403  585980 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-335655] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 09:58:37.530477  585980 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:58:37.530640  585980 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-335655] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 09:58:37.530744  585980 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:58:37.530838  585980 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:58:37.530904  585980 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:58:37.530979  585980 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:58:37.531046  585980 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:58:37.531119  585980 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:58:37.531208  585980 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:58:37.531289  585980 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:58:37.531446  585980 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:58:37.531531  585980 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:58:37.533125  585980 out.go:252]   - Booting up control plane ...
	I1115 09:58:37.533639  585980 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:58:37.533800  585980 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:58:37.533894  585980 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:58:37.534052  585980 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:58:37.534247  585980 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:58:37.534337  585980 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:58:37.534571  585980 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1115 09:58:37.535356  585980 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.002460 seconds
	I1115 09:58:37.535637  585980 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:58:37.535818  585980 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:58:37.535902  585980 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:58:37.536175  585980 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-335655 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:58:37.536257  585980 kubeadm.go:319] [bootstrap-token] Using token: olz1a2.naoibbsbc9ube8ph
	I1115 09:58:37.541994  585980 out.go:252]   - Configuring RBAC rules ...
	I1115 09:58:37.542143  585980 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:58:37.542254  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:58:37.542498  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:58:37.542676  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:58:37.542850  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:58:37.542986  585980 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:58:37.543210  585980 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:58:37.543345  585980 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:58:37.543455  585980 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:58:37.543475  585980 kubeadm.go:319] 
	I1115 09:58:37.543575  585980 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:58:37.543587  585980 kubeadm.go:319] 
	I1115 09:58:37.543679  585980 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:58:37.543692  585980 kubeadm.go:319] 
	I1115 09:58:37.543723  585980 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:58:37.543799  585980 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:58:37.543871  585980 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:58:37.543880  585980 kubeadm.go:319] 
	I1115 09:58:37.543949  585980 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:58:37.543959  585980 kubeadm.go:319] 
	I1115 09:58:37.544017  585980 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:58:37.544026  585980 kubeadm.go:319] 
	I1115 09:58:37.544093  585980 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:58:37.544193  585980 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:58:37.544287  585980 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:58:37.544297  585980 kubeadm.go:319] 
	I1115 09:58:37.544420  585980 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:58:37.544527  585980 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:58:37.544541  585980 kubeadm.go:319] 
	I1115 09:58:37.544656  585980 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token olz1a2.naoibbsbc9ube8ph \
	I1115 09:58:37.544787  585980 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 09:58:37.544816  585980 kubeadm.go:319] 	--control-plane 
	I1115 09:58:37.544822  585980 kubeadm.go:319] 
	I1115 09:58:37.544931  585980 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:58:37.544938  585980 kubeadm.go:319] 
	I1115 09:58:37.545035  585980 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token olz1a2.naoibbsbc9ube8ph \
	I1115 09:58:37.545196  585980 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 09:58:37.545208  585980 cni.go:84] Creating CNI manager for ""
	I1115 09:58:37.545217  585980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:58:37.549811  585980 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:58:37.551294  585980 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:58:37.558028  585980 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1115 09:58:37.558051  585980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:58:37.578448  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 09:58:35.109015  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:35.109592  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:58:35.109649  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:35.109704  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:35.139619  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:35.139647  539051 cri.go:89] found id: ""
	I1115 09:58:35.139657  539051 logs.go:282] 1 containers: [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:35.139730  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:35.144017  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:35.144089  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:35.175873  539051 cri.go:89] found id: ""
	I1115 09:58:35.175901  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.175913  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:35.175922  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:35.175978  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:35.205507  539051 cri.go:89] found id: ""
	I1115 09:58:35.205534  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.205542  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:35.205548  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:35.205610  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:35.237232  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:35.237259  539051 cri.go:89] found id: ""
	I1115 09:58:35.237271  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:35.237342  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:35.241823  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:35.241896  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:35.276658  539051 cri.go:89] found id: ""
	I1115 09:58:35.276689  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.276700  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:35.276708  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:35.276775  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:35.309930  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:35.309956  539051 cri.go:89] found id: ""
	I1115 09:58:35.309965  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:35.310025  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:35.314615  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:35.314699  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:35.349847  539051 cri.go:89] found id: ""
	I1115 09:58:35.349878  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.349889  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:35.349902  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:35.349963  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:35.383056  539051 cri.go:89] found id: ""
	I1115 09:58:35.383084  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.383095  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:35.383109  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:35.383128  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:35.425480  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:35.425577  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:35.487718  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:35.487762  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:35.521375  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:35.521418  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:35.577108  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:35.577154  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:58:35.613515  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:35.613554  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:35.720375  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:35.720427  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:35.743158  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:35.743198  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:58:35.813158  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:58:38.314463  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:38.314952  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:58:38.315016  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:38.315133  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:38.355363  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:38.355386  539051 cri.go:89] found id: ""
	I1115 09:58:38.355419  539051 logs.go:282] 1 containers: [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:38.355478  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:38.360324  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:38.360380  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:38.398537  539051 cri.go:89] found id: ""
	I1115 09:58:38.398563  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.398573  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:38.398581  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:38.398646  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:38.438525  539051 cri.go:89] found id: ""
	I1115 09:58:38.438564  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.438576  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:38.438584  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:38.438642  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:38.475177  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:38.475204  539051 cri.go:89] found id: ""
	I1115 09:58:38.475215  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:38.475282  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:38.480904  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:38.480989  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:38.518293  539051 cri.go:89] found id: ""
	I1115 09:58:38.518326  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.518336  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:38.518343  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:38.518412  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:38.552194  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:38.552217  539051 cri.go:89] found id: ""
	I1115 09:58:38.552226  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:38.552280  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:38.557164  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:38.557243  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:38.589885  539051 cri.go:89] found id: ""
	I1115 09:58:38.589913  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.589925  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:38.589934  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:38.590002  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:38.625434  539051 cri.go:89] found id: ""
	I1115 09:58:38.625466  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.625478  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:38.625491  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:38.625504  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:38.686872  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:38.686910  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:58:38.722145  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:38.722182  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:38.853786  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:38.853821  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:38.874289  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:38.874329  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:58:38.951416  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:58:38.951444  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:38.951463  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:38.991144  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:38.991180  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:39.051244  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:39.051286  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:37.170195  589862 cli_runner.go:164] Run: docker network inspect no-preload-559401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:58:37.194922  589862 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 09:58:37.199610  589862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:58:37.210098  589862 kubeadm.go:884] updating cluster {Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:58:37.210240  589862 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:58:37.210289  589862 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:58:37.237265  589862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 09:58:37.237294  589862 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1115 09:58:37.237347  589862 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:37.237373  589862 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.237427  589862 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.237436  589862 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 09:58:37.237438  589862 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.237477  589862 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.237495  589862 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.237400  589862 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.238654  589862 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.238755  589862 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.238790  589862 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.238790  589862 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.238654  589862 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.238790  589862 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.238834  589862 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:37.238852  589862 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 09:58:37.368858  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.388030  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.390652  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.402893  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1115 09:58:37.416483  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.423499  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.427234  589862 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1115 09:58:37.427278  589862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.427328  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.445632  589862 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1115 09:58:37.445685  589862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.445739  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.449162  589862 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1115 09:58:37.449205  589862 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.449251  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.456823  589862 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1115 09:58:37.456873  589862 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1115 09:58:37.456925  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.462211  589862 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1115 09:58:37.462252  589862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.462296  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.467693  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.469716  589862 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1115 09:58:37.469751  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.469763  589862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.469776  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.469813  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.469846  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.469873  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.469848  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 09:58:37.520598  589862 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1115 09:58:37.520644  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.520649  589862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.520683  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.520925  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.520988  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 09:58:37.521005  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.521059  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.521156  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.559429  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.559478  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.562902  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 09:58:37.563009  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.563105  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.563211  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.567478  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.603335  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.603731  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.607984  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 09:58:37.608024  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1115 09:58:37.608097  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 09:58:37.608135  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1115 09:58:37.608164  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 09:58:37.608230  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1115 09:58:37.613074  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1115 09:58:37.613175  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1115 09:58:37.613376  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 09:58:37.613539  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 09:58:37.637537  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 09:58:37.637636  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 09:58:37.637659  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.637682  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1115 09:58:37.637708  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1115 09:58:37.637728  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1115 09:58:37.637748  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1115 09:58:37.637759  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1115 09:58:37.637776  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1115 09:58:37.637830  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1115 09:58:37.637849  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1115 09:58:37.637811  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1115 09:58:37.637881  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1115 09:58:37.645638  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1115 09:58:37.645672  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1115 09:58:37.686387  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 09:58:37.686509  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 09:58:37.708289  589862 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1115 09:58:37.708371  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1115 09:58:37.775568  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1115 09:58:37.775610  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1115 09:58:38.127056  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1115 09:58:38.127105  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 09:58:38.127168  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 09:58:38.588766  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:39.393213  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.266010739s)
	I1115 09:58:39.393247  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1115 09:58:39.393274  589862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1115 09:58:39.393312  589862 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1115 09:58:39.393346  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1115 09:58:39.393358  589862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:39.393436  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:39.397614  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:38.486205  585980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:58:38.486290  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:38.486305  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-335655 minikube.k8s.io/updated_at=2025_11_15T09_58_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=old-k8s-version-335655 minikube.k8s.io/primary=true
	I1115 09:58:38.569942  585980 ops.go:34] apiserver oom_adj: -16
	I1115 09:58:38.570037  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:39.070852  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:39.570810  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:40.070706  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:40.570151  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:41.071113  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:41.570266  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:42.070123  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:42.570368  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:41.587462  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:40.679243  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.285866204s)
	I1115 09:58:40.679279  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1115 09:58:40.679275  589862 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.281627059s)
	I1115 09:58:40.679311  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 09:58:40.679350  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:40.679370  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 09:58:40.706812  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:42.076497  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.397096551s)
	I1115 09:58:42.076542  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1115 09:58:42.076567  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 09:58:42.076510  589862 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.369660524s)
	I1115 09:58:42.076663  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1115 09:58:42.076617  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 09:58:42.076760  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1115 09:58:43.672868  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.596125693s)
	I1115 09:58:43.672901  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1115 09:58:43.672866  589862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.59607551s)
	I1115 09:58:43.672934  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 09:58:43.672977  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1115 09:58:43.672986  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 09:58:43.673004  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1115 09:58:44.853738  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.180718097s)
	I1115 09:58:44.853771  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1115 09:58:44.853801  589862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1115 09:58:44.853841  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1115 09:58:43.070901  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:43.570489  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:44.070154  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:44.570975  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:45.070251  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:45.571098  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:46.070925  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:46.571003  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:47.070168  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:47.571032  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:46.589587  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1115 09:58:46.589659  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:46.589759  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:46.624264  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:58:46.624289  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:46.624295  539051 cri.go:89] found id: ""
	I1115 09:58:46.624305  539051 logs.go:282] 2 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:46.624364  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.629325  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.633650  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:46.633736  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:46.665182  539051 cri.go:89] found id: ""
	I1115 09:58:46.665205  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.665213  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:46.665221  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:46.665270  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:46.696039  539051 cri.go:89] found id: ""
	I1115 09:58:46.696066  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.696078  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:46.696087  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:46.696142  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:46.727651  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:46.727678  539051 cri.go:89] found id: ""
	I1115 09:58:46.727688  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:46.727747  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.732339  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:46.732425  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:46.761424  539051 cri.go:89] found id: ""
	I1115 09:58:46.761455  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.761467  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:46.761475  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:46.761540  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:46.790988  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:46.791014  539051 cri.go:89] found id: ""
	I1115 09:58:46.791025  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:46.791081  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.795775  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:46.795838  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:46.828078  539051 cri.go:89] found id: ""
	I1115 09:58:46.828105  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.828115  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:46.828123  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:46.828188  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:46.858191  539051 cri.go:89] found id: ""
	I1115 09:58:46.858217  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.858225  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:46.858240  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:58:46.858254  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:58:46.893709  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:46.893740  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:46.951755  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:46.951792  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:47.012185  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:47.012226  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:47.114132  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:47.114170  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:47.133687  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:47.133723  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1115 09:58:48.333733  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.479866558s)
	I1115 09:58:48.333763  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1115 09:58:48.333787  589862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1115 09:58:48.333840  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1115 09:58:48.887418  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1115 09:58:48.887465  589862 cache_images.go:125] Successfully loaded all cached images
	I1115 09:58:48.887471  589862 cache_images.go:94] duration metric: took 11.650162064s to LoadCachedImages
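The image-loading phase logged above repeats one pattern per image: stat the target path on the node, transfer the cached tarball only when the stat fails, then load it into the CRI-O store with "podman load -i". A minimal standalone Go sketch of that check-then-load flow follows; the cache path and the podman invocation mirror the log, but the helper itself is illustrative only and is not minikube's actual implementation (which runs these commands over SSH).

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// loadImageIfMissing copies a cached image tarball to dst (unless dst already
// exists) and then loads it into the local podman/CRI-O image store.
// Illustrative sketch of the stat -> transfer -> podman load pattern in the log.
func loadImageIfMissing(cached, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("%s already present, skipping transfer\n", dst)
	} else {
		src, err := os.Open(cached)
		if err != nil {
			return fmt.Errorf("open cached image: %w", err)
		}
		defer src.Close()
		out, err := os.Create(dst)
		if err != nil {
			return fmt.Errorf("create destination: %w", err)
		}
		if _, err := io.Copy(out, src); err != nil {
			out.Close()
			return fmt.Errorf("copy image: %w", err)
		}
		out.Close()
	}
	// Equivalent of: sudo podman load -i <dst>
	cmd := exec.Command("sudo", "podman", "load", "-i", dst)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImageIfMissing(
		os.ExpandEnv("$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1"),
		"/var/lib/minikube/images/pause_3.10.1",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}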
	I1115 09:58:48.887486  589862 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 09:58:48.887599  589862 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-559401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 09:58:48.887681  589862 ssh_runner.go:195] Run: crio config
	I1115 09:58:48.935652  589862 cni.go:84] Creating CNI manager for ""
	I1115 09:58:48.935679  589862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:58:48.935698  589862 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:58:48.935727  589862 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-559401 NodeName:no-preload-559401 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:58:48.935955  589862 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-559401"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
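The generated kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---" markers, later written to /var/tmp/minikube/kubeadm.yaml.new. As a quick, dependency-free way to sanity-check such a file, the Go sketch below lists the apiVersion/kind of each document; the kubeadm.yaml filename is a hypothetical local copy, and the snippet is not part of the test suite.

package main

import (
	"fmt"
	"os"
	"strings"
)

// Lists the apiVersion/kind of each YAML document in a multi-document
// kubeadm config such as the one printed in the log above.
func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the config
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		fmt.Printf("document %d: %s/%s\n", i+1, apiVersion, kind)
	}
}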
	
	I1115 09:58:48.936036  589862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:58:48.944737  589862 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1115 09:58:48.944809  589862 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1115 09:58:48.953950  589862 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1115 09:58:48.954034  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1115 09:58:48.954060  589862 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1115 09:58:48.954089  589862 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1115 09:58:48.958596  589862 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1115 09:58:48.958631  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
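The kubectl, kubeadm, and kubelet downloads above use URLs of the form ...?checksum=file:<same URL>.sha256, meaning each binary is verified against its published SHA-256 digest before being cached and copied to the node. A minimal Go sketch of that verification step is below; the kubectl and kubectl.sha256 file names are placeholders for a downloaded binary and its digest sidecar.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 compares the SHA-256 digest of path against the hex digest
// stored in sumPath (the ".sha256" sidecar file published alongside the binary).
func verifySHA256(path, sumPath string) error {
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch: got %s", got)
	}
	return nil
}

func main() {
	// Placeholder file names; substitute the downloaded binary and its .sha256 file.
	if err := verifySHA256("kubectl", "kubectl.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}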
	I1115 09:58:48.070773  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:48.570466  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:49.070716  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:49.570991  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:50.071061  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:50.178640  585980 kubeadm.go:1114] duration metric: took 11.692414753s to wait for elevateKubeSystemPrivileges
	I1115 09:58:50.178690  585980 kubeadm.go:403] duration metric: took 21.402833585s to StartCluster
	I1115 09:58:50.178714  585980 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.178808  585980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:58:50.180095  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.180339  585980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:58:50.180357  585980 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:58:50.180477  585980 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:58:50.180571  585980 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-335655"
	I1115 09:58:50.180592  585980 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-335655"
	I1115 09:58:50.180603  585980 config.go:182] Loaded profile config "old-k8s-version-335655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 09:58:50.180632  585980 host.go:66] Checking if "old-k8s-version-335655" exists ...
	I1115 09:58:50.180655  585980 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-335655"
	I1115 09:58:50.180672  585980 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-335655"
	I1115 09:58:50.181063  585980 cli_runner.go:164] Run: docker container inspect old-k8s-version-335655 --format={{.State.Status}}
	I1115 09:58:50.181276  585980 cli_runner.go:164] Run: docker container inspect old-k8s-version-335655 --format={{.State.Status}}
	I1115 09:58:50.184320  585980 out.go:179] * Verifying Kubernetes components...
	I1115 09:58:50.185582  585980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:58:50.208776  585980 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:50.209642  585980 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-335655"
	I1115 09:58:50.209728  585980 host.go:66] Checking if "old-k8s-version-335655" exists ...
	I1115 09:58:50.210281  585980 cli_runner.go:164] Run: docker container inspect old-k8s-version-335655 --format={{.State.Status}}
	I1115 09:58:50.211695  585980 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:58:50.211717  585980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:58:50.211770  585980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-335655
	I1115 09:58:50.243519  585980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/old-k8s-version-335655/id_rsa Username:docker}
	I1115 09:58:50.247576  585980 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:58:50.247608  585980 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:58:50.247676  585980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-335655
	I1115 09:58:50.275842  585980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/old-k8s-version-335655/id_rsa Username:docker}
	I1115 09:58:50.295150  585980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:58:50.350270  585980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:58:50.362178  585980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:58:50.392335  585980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:58:50.559073  585980 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 09:58:50.560112  585980 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-335655" to be "Ready" ...
	I1115 09:58:50.836571  585980 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 09:58:50.837873  585980 addons.go:515] duration metric: took 657.393104ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 09:58:51.063769  585980 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-335655" context rescaled to 1 replicas
	W1115 09:58:52.563949  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	I1115 09:58:49.900772  589862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:58:49.915081  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1115 09:58:49.920412  589862 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1115 09:58:49.920459  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1115 09:58:50.020568  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1115 09:58:50.027737  589862 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1115 09:58:50.027790  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1115 09:58:50.308923  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:58:50.320504  589862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 09:58:50.336119  589862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:58:50.354358  589862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1115 09:58:50.371220  589862 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:58:50.377002  589862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
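The shell pipeline above makes the control-plane host record idempotent: it filters out any existing control-plane.minikube.internal line from /etc/hosts and appends the current node IP. The same idea expressed as a small Go sketch; it deliberately operates on a scratch copy of the hosts file (hosts.copy is a placeholder) rather than /etc/hosts, since it is illustrative only.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord drops any existing line whose last field is host and
// appends "ip\thost", mirroring the grep -v / echo pipeline in the log.
// Simplified: real hosts entries may list several names per line.
func ensureHostRecord(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // drop the stale record for this host name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Use a scratch copy rather than the real /etc/hosts for this illustration.
	if err := ensureHostRecord("hosts.copy", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("host record ensured")
}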
	I1115 09:58:50.390186  589862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:58:50.504769  589862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:58:50.535882  589862 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401 for IP: 192.168.103.2
	I1115 09:58:50.535903  589862 certs.go:195] generating shared ca certs ...
	I1115 09:58:50.535924  589862 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.536096  589862 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:58:50.536319  589862 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:58:50.536379  589862 certs.go:257] generating profile certs ...
	I1115 09:58:50.536551  589862 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.key
	I1115 09:58:50.536611  589862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt with IP's: []
	I1115 09:58:50.654774  589862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt ...
	I1115 09:58:50.654816  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt: {Name:mkf7eb6dd7672898489471e2954de98923605286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.655021  589862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.key ...
	I1115 09:58:50.655040  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.key: {Name:mke4b476571efd801c87de00dd4f3d2a6f4ddbbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.655161  589862 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b
	I1115 09:58:50.655183  589862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1115 09:58:50.980637  589862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b ...
	I1115 09:58:50.980669  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b: {Name:mk4986c594ab003033b784ceacd55ced33e1763e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.980844  589862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b ...
	I1115 09:58:50.980872  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b: {Name:mka6cc1e0399e53e8bf66b9c9957ff5fd5d16d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.981003  589862 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt
	I1115 09:58:50.981104  589862 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key
	I1115 09:58:50.981196  589862 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key
	I1115 09:58:50.981220  589862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt with IP's: []
	I1115 09:58:51.486385  589862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt ...
	I1115 09:58:51.486426  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt: {Name:mk86548d2fb9cffa7c9e24d245dabba7628d775d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:51.486620  589862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key ...
	I1115 09:58:51.486644  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key: {Name:mkb62ec7546a0f0eb8a891ecd6f3d1c152e38f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
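The certs.go/crypto.go lines above create the profile's client, apiserver, and aggregator key pairs, each certificate signed by the cached minikubeCA and carrying the IP SANs listed earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). The following crypto/x509 sketch shows the general shape of issuing such a SAN-bearing certificate; it generates a throwaway CA in place of the cached one and is not the minikube code itself.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway self-signed CA, standing in for the cached minikubeCA key pair.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate carrying the apiserver-style IP SANs from the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Fprintln(os.Stderr, "issued serving cert signed by the throwaway CA")
}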
	I1115 09:58:51.486849  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:58:51.486896  589862 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:58:51.486924  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:58:51.486959  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:58:51.486999  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:58:51.487036  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:58:51.487099  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:58:51.487717  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:58:51.505824  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:58:51.524629  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:58:51.542831  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:58:51.561961  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 09:58:51.580560  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:58:51.598695  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:58:51.617058  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:58:51.634459  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:58:51.655672  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:58:51.674543  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:58:51.693317  589862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:58:51.706898  589862 ssh_runner.go:195] Run: openssl version
	I1115 09:58:51.713710  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:58:51.723372  589862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:58:51.727726  589862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:58:51.727794  589862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:58:51.778507  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:58:51.791304  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:58:51.804320  589862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:51.810297  589862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:51.810365  589862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:51.872433  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:58:51.885717  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:58:51.899013  589862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:58:51.904365  589862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:58:51.904462  589862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:58:51.962993  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
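The test -s / ln -fs / openssl x509 -hash sequence above installs each CA certificate into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0), which is how the system trust store resolves issuers. Below is a small Go wrapper around the same "openssl x509 -hash -noout -in" command seen in the log; the certificate path and output directory in main are placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash runs `openssl x509 -hash -noout -in certPath` and creates
// <dir>/<hash>.0 pointing at certPath, like the ln -fs steps in the log.
func linkBySubjectHash(certPath, dir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace an existing link, as ln -fs would
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	// Placeholder paths; point these at a real PEM certificate and target directory.
	link, err := linkBySubjectHash("minikubeCA.pem", ".")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}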
	I1115 09:58:51.975533  589862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:58:51.980732  589862 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:58:51.980806  589862 kubeadm.go:401] StartCluster: {Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:58:51.980917  589862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:58:51.981003  589862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:58:52.018488  589862 cri.go:89] found id: ""
	I1115 09:58:52.018652  589862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:58:52.030847  589862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:58:52.042691  589862 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:58:52.042765  589862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:58:52.053828  589862 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:58:52.053855  589862 kubeadm.go:158] found existing configuration files:
	
	I1115 09:58:52.053907  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:58:52.065475  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:58:52.065547  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:58:52.075773  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:58:52.086227  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:58:52.086291  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:58:52.096762  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:58:52.107754  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:58:52.107818  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:58:52.118641  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:58:52.129548  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:58:52.129612  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
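The four grep-then-rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is otherwise deleted so that kubeadm init can regenerate it. A Go sketch of that check-and-remove step, using the same file list and endpoint from the log, purely for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path unless it already references endpoint,
// mirroring the `grep ... || rm -f ...` steps in the log. Missing files
// are treated the same as stale ones (nothing to keep).
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // config already targets the control-plane endpoint, keep it
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}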
	I1115 09:58:52.140206  589862 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:58:52.216980  589862 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:58:52.295837  589862 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1115 09:58:55.064556  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	W1115 09:58:57.564147  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	I1115 09:58:57.201317  539051 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.067565178s)
	W1115 09:58:57.201371  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1115 09:58:57.201388  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:57.201426  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:57.246162  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:57.246208  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:57.279745  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:57.279789  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:01.843237  589862 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:59:01.843317  589862 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:59:01.843408  589862 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:59:01.843482  589862 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:59:01.843531  589862 kubeadm.go:319] OS: Linux
	I1115 09:59:01.843604  589862 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:59:01.843722  589862 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:59:01.843805  589862 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:59:01.843883  589862 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:59:01.843950  589862 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:59:01.844027  589862 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:59:01.844100  589862 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:59:01.844195  589862 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:59:01.844304  589862 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:59:01.844451  589862 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:59:01.844630  589862 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:59:01.844724  589862 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:59:01.846951  589862 out.go:252]   - Generating certificates and keys ...
	I1115 09:59:01.847036  589862 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:59:01.847125  589862 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:59:01.847240  589862 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:59:01.847322  589862 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:59:01.847442  589862 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:59:01.847517  589862 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:59:01.847594  589862 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:59:01.847786  589862 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-559401] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 09:59:01.847867  589862 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:59:01.848053  589862 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-559401] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 09:59:01.848149  589862 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:59:01.848252  589862 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:59:01.848314  589862 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:59:01.848413  589862 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:59:01.848473  589862 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:59:01.848539  589862 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:59:01.848611  589862 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:59:01.848701  589862 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:59:01.848796  589862 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:59:01.848974  589862 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:59:01.849089  589862 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:59:01.850586  589862 out.go:252]   - Booting up control plane ...
	I1115 09:59:01.850699  589862 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:59:01.850837  589862 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:59:01.850930  589862 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:59:01.851112  589862 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:59:01.851252  589862 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:59:01.851424  589862 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:59:01.851564  589862 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:59:01.851602  589862 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:59:01.851716  589862 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:59:01.851834  589862 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:59:01.851926  589862 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001771287s
	I1115 09:59:01.852053  589862 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:59:01.852175  589862 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1115 09:59:01.852310  589862 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:59:01.852463  589862 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:59:01.852600  589862 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.284780336s
	I1115 09:59:01.852697  589862 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.748187822s
	I1115 09:59:01.852792  589862 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002085645s
	I1115 09:59:01.852910  589862 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:59:01.853059  589862 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:59:01.853112  589862 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:59:01.853348  589862 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-559401 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:59:01.853460  589862 kubeadm.go:319] [bootstrap-token] Using token: 9z2agn.qs0z4ulg6bsyvbug
	I1115 09:59:01.855000  589862 out.go:252]   - Configuring RBAC rules ...
	I1115 09:59:01.855162  589862 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:59:01.855277  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:59:01.855452  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:59:01.855626  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:59:01.855787  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:59:01.855903  589862 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:59:01.856036  589862 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:59:01.856117  589862 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:59:01.856198  589862 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:59:01.856211  589862 kubeadm.go:319] 
	I1115 09:59:01.856290  589862 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:59:01.856307  589862 kubeadm.go:319] 
	I1115 09:59:01.856422  589862 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:59:01.856432  589862 kubeadm.go:319] 
	I1115 09:59:01.856461  589862 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:59:01.856541  589862 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:59:01.856592  589862 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:59:01.856598  589862 kubeadm.go:319] 
	I1115 09:59:01.856640  589862 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:59:01.856645  589862 kubeadm.go:319] 
	I1115 09:59:01.856689  589862 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:59:01.856699  589862 kubeadm.go:319] 
	I1115 09:59:01.856756  589862 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:59:01.856863  589862 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:59:01.856942  589862 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:59:01.856952  589862 kubeadm.go:319] 
	I1115 09:59:01.857054  589862 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:59:01.857119  589862 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:59:01.857125  589862 kubeadm.go:319] 
	I1115 09:59:01.857232  589862 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9z2agn.qs0z4ulg6bsyvbug \
	I1115 09:59:01.857416  589862 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 09:59:01.857452  589862 kubeadm.go:319] 	--control-plane 
	I1115 09:59:01.857461  589862 kubeadm.go:319] 
	I1115 09:59:01.857619  589862 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:59:01.857635  589862 kubeadm.go:319] 
	I1115 09:59:01.857736  589862 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9z2agn.qs0z4ulg6bsyvbug \
	I1115 09:59:01.857898  589862 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 09:59:01.857918  589862 cni.go:84] Creating CNI manager for ""
	I1115 09:59:01.857931  589862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:59:01.860665  589862 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 09:59:00.064033  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	W1115 09:59:02.563741  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	I1115 09:58:59.819942  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:01.498994  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:35380->192.168.76.2:8443: read: connection reset by peer
	I1115 09:59:01.499072  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:01.499138  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:01.529135  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:01.529161  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:59:01.529165  539051 cri.go:89] found id: ""
	I1115 09:59:01.529173  539051 logs.go:282] 2 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:59:01.529237  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.533524  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.537428  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:01.537497  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:01.565354  539051 cri.go:89] found id: ""
	I1115 09:59:01.565381  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.565423  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:01.565433  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:01.565496  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:01.593051  539051 cri.go:89] found id: ""
	I1115 09:59:01.593080  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.593090  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:01.593098  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:01.593159  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:01.621510  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:01.621539  539051 cri.go:89] found id: ""
	I1115 09:59:01.621550  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:01.621600  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.626015  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:01.626087  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:01.655546  539051 cri.go:89] found id: ""
	I1115 09:59:01.655571  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.655579  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:01.655586  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:01.655641  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:01.683273  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:01.683294  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:01.683298  539051 cri.go:89] found id: ""
	I1115 09:59:01.683305  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:01.683360  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.687943  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.691770  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:01.691837  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:01.719169  539051 cri.go:89] found id: ""
	I1115 09:59:01.719198  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.719209  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:01.719215  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:01.719280  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:01.747446  539051 cri.go:89] found id: ""
	I1115 09:59:01.747479  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.747491  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:01.747511  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:01.747526  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:01.765002  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:01.765036  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:01.799479  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:59:01.799508  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:59:01.832627  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:01.832666  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:01.892833  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:01.892869  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:01.923960  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:01.924003  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:01.984171  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:01.984204  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:02.053287  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:02.053316  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:02.053339  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:02.085806  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:02.085843  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:02.123372  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:02.123413  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:01.861939  589862 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:59:01.867990  589862 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:59:01.868013  589862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:59:01.883408  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 09:59:02.123350  589862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:59:02.123545  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:02.123674  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-559401 minikube.k8s.io/updated_at=2025_11_15T09_59_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=no-preload-559401 minikube.k8s.io/primary=true
	I1115 09:59:02.139292  589862 ops.go:34] apiserver oom_adj: -16
	I1115 09:59:02.214537  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:02.715557  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:03.214907  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:03.715555  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:04.215364  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:04.715369  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:04.063246  585980 node_ready.go:49] node "old-k8s-version-335655" is "Ready"
	I1115 09:59:04.063283  585980 node_ready.go:38] duration metric: took 13.503131624s for node "old-k8s-version-335655" to be "Ready" ...
	I1115 09:59:04.063325  585980 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:59:04.063384  585980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:59:04.076227  585980 api_server.go:72] duration metric: took 13.895716827s to wait for apiserver process to appear ...
	I1115 09:59:04.076261  585980 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:59:04.076287  585980 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 09:59:04.080488  585980 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 09:59:04.081698  585980 api_server.go:141] control plane version: v1.28.0
	I1115 09:59:04.081725  585980 api_server.go:131] duration metric: took 5.455488ms to wait for apiserver health ...
	I1115 09:59:04.081735  585980 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:59:04.086869  585980 system_pods.go:59] 8 kube-system pods found
	I1115 09:59:04.087029  585980 system_pods.go:61] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:04.087376  585980 system_pods.go:61] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.087433  585980 system_pods.go:61] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.087442  585980 system_pods.go:61] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.087447  585980 system_pods.go:61] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.087452  585980 system_pods.go:61] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.087457  585980 system_pods.go:61] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.087467  585980 system_pods.go:61] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:04.087477  585980 system_pods.go:74] duration metric: took 5.733703ms to wait for pod list to return data ...
	I1115 09:59:04.087494  585980 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:59:04.090133  585980 default_sa.go:45] found service account: "default"
	I1115 09:59:04.090160  585980 default_sa.go:55] duration metric: took 2.6594ms for default service account to be created ...
	I1115 09:59:04.090173  585980 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:59:04.093482  585980 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:04.093513  585980 system_pods.go:89] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:04.093522  585980 system_pods.go:89] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.093532  585980 system_pods.go:89] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.093539  585980 system_pods.go:89] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.093550  585980 system_pods.go:89] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.093556  585980 system_pods.go:89] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.093561  585980 system_pods.go:89] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.093570  585980 system_pods.go:89] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:04.093601  585980 retry.go:31] will retry after 288.966141ms: missing components: kube-dns
	I1115 09:59:04.386773  585980 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:04.386811  585980 system_pods.go:89] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:04.386821  585980 system_pods.go:89] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.386829  585980 system_pods.go:89] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.386835  585980 system_pods.go:89] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.386840  585980 system_pods.go:89] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.386844  585980 system_pods.go:89] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.386850  585980 system_pods.go:89] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.386857  585980 system_pods.go:89] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:04.386882  585980 retry.go:31] will retry after 253.204914ms: missing components: kube-dns
	I1115 09:59:04.644142  585980 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:04.644181  585980 system_pods.go:89] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Running
	I1115 09:59:04.644191  585980 system_pods.go:89] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.644197  585980 system_pods.go:89] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.644203  585980 system_pods.go:89] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.644209  585980 system_pods.go:89] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.644214  585980 system_pods.go:89] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.644218  585980 system_pods.go:89] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.644224  585980 system_pods.go:89] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Running
	I1115 09:59:04.644234  585980 system_pods.go:126] duration metric: took 554.05419ms to wait for k8s-apps to be running ...
	I1115 09:59:04.644249  585980 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:59:04.644310  585980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:59:04.658747  585980 system_svc.go:56] duration metric: took 14.468086ms WaitForService to wait for kubelet
	I1115 09:59:04.658785  585980 kubeadm.go:587] duration metric: took 14.478283435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:59:04.658805  585980 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:59:04.661765  585980 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:59:04.661793  585980 node_conditions.go:123] node cpu capacity is 8
	I1115 09:59:04.661807  585980 node_conditions.go:105] duration metric: took 2.998154ms to run NodePressure ...
	I1115 09:59:04.661819  585980 start.go:242] waiting for startup goroutines ...
	I1115 09:59:04.661825  585980 start.go:247] waiting for cluster config update ...
	I1115 09:59:04.661835  585980 start.go:256] writing updated cluster config ...
	I1115 09:59:04.662094  585980 ssh_runner.go:195] Run: rm -f paused
	I1115 09:59:04.666110  585980 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:59:04.670626  585980 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-j8hqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.675683  585980 pod_ready.go:94] pod "coredns-5dd5756b68-j8hqh" is "Ready"
	I1115 09:59:04.675708  585980 pod_ready.go:86] duration metric: took 5.059433ms for pod "coredns-5dd5756b68-j8hqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.678537  585980 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.683119  585980 pod_ready.go:94] pod "etcd-old-k8s-version-335655" is "Ready"
	I1115 09:59:04.683143  585980 pod_ready.go:86] duration metric: took 4.578145ms for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.686098  585980 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.691448  585980 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-335655" is "Ready"
	I1115 09:59:04.691479  585980 pod_ready.go:86] duration metric: took 5.351046ms for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.696203  585980 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.070297  585980 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-335655" is "Ready"
	I1115 09:59:05.070323  585980 pod_ready.go:86] duration metric: took 374.090068ms for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.271587  585980 pod_ready.go:83] waiting for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.671223  585980 pod_ready.go:94] pod "kube-proxy-ndp6f" is "Ready"
	I1115 09:59:05.671254  585980 pod_ready.go:86] duration metric: took 399.63459ms for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.871120  585980 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:06.270375  585980 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-335655" is "Ready"
	I1115 09:59:06.270414  585980 pod_ready.go:86] duration metric: took 399.263099ms for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:06.270430  585980 pod_ready.go:40] duration metric: took 1.604271623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:59:06.319133  585980 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 09:59:06.321125  585980 out.go:203] 
	W1115 09:59:06.322520  585980 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 09:59:06.323807  585980 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 09:59:06.325357  585980 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-335655" cluster and "default" namespace by default
	I1115 09:59:05.214672  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:05.714585  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:06.214843  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:06.714731  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:07.214680  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:07.282745  589862 kubeadm.go:1114] duration metric: took 5.159254951s to wait for elevateKubeSystemPrivileges
	I1115 09:59:07.282789  589862 kubeadm.go:403] duration metric: took 15.301991399s to StartCluster
	I1115 09:59:07.282812  589862 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:59:07.282897  589862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:59:07.284224  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:59:07.284497  589862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:59:07.284513  589862 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:59:07.284596  589862 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:59:07.284708  589862 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:59:07.284714  589862 addons.go:70] Setting storage-provisioner=true in profile "no-preload-559401"
	I1115 09:59:07.284732  589862 addons.go:70] Setting default-storageclass=true in profile "no-preload-559401"
	I1115 09:59:07.284755  589862 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-559401"
	I1115 09:59:07.284736  589862 addons.go:239] Setting addon storage-provisioner=true in "no-preload-559401"
	I1115 09:59:07.284849  589862 host.go:66] Checking if "no-preload-559401" exists ...
	I1115 09:59:07.285171  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:59:07.285363  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:59:07.286334  589862 out.go:179] * Verifying Kubernetes components...
	I1115 09:59:07.287959  589862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:59:07.310269  589862 addons.go:239] Setting addon default-storageclass=true in "no-preload-559401"
	I1115 09:59:07.310306  589862 host.go:66] Checking if "no-preload-559401" exists ...
	I1115 09:59:07.310666  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:59:07.312231  589862 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:59:07.314067  589862 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:59:07.314089  589862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:59:07.314150  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:59:07.347314  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:59:07.348931  589862 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:59:07.349025  589862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:59:07.349096  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:59:07.377854  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:59:07.396020  589862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:59:07.440608  589862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:59:07.469330  589862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:59:07.492839  589862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:59:07.562143  589862 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1115 09:59:07.563514  589862 node_ready.go:35] waiting up to 6m0s for node "no-preload-559401" to be "Ready" ...
	I1115 09:59:07.788210  589862 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 09:59:04.748782  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:04.749213  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:04.749269  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:04.749319  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:04.779723  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:04.779750  539051 cri.go:89] found id: ""
	I1115 09:59:04.779761  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:04.779829  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.784483  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:04.784558  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:04.814230  539051 cri.go:89] found id: ""
	I1115 09:59:04.814253  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.814261  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:04.814267  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:04.814320  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:04.842414  539051 cri.go:89] found id: ""
	I1115 09:59:04.842444  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.842452  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:04.842459  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:04.842520  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:04.871888  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:04.871909  539051 cri.go:89] found id: ""
	I1115 09:59:04.871917  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:04.871966  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.876258  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:04.876324  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:04.904787  539051 cri.go:89] found id: ""
	I1115 09:59:04.904809  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.904817  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:04.904825  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:04.904886  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:04.933869  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:04.933892  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:04.933898  539051 cri.go:89] found id: ""
	I1115 09:59:04.933907  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:04.933968  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.938118  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.941861  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:04.941931  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:04.970227  539051 cri.go:89] found id: ""
	I1115 09:59:04.970260  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.970271  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:04.970278  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:04.970331  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:04.997732  539051 cri.go:89] found id: ""
	I1115 09:59:04.997757  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.997764  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:04.997788  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:04.997803  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:05.032018  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:05.032052  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:05.060554  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:05.060584  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:05.088865  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:05.088895  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:05.178592  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:05.178628  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:05.195670  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:05.195699  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:05.258518  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:05.258542  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:05.258557  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:05.312985  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:05.313020  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:05.365977  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:05.366017  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:07.898786  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:07.899246  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:07.899298  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:07.899345  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:07.929383  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:07.929422  539051 cri.go:89] found id: ""
	I1115 09:59:07.929433  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:07.929489  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:07.933944  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:07.934015  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:07.965700  539051 cri.go:89] found id: ""
	I1115 09:59:07.965730  539051 logs.go:282] 0 containers: []
	W1115 09:59:07.965743  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:07.965750  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:07.965809  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:07.994470  539051 cri.go:89] found id: ""
	I1115 09:59:07.994499  539051 logs.go:282] 0 containers: []
	W1115 09:59:07.994509  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:07.994519  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:07.994578  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:08.021550  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:08.021583  539051 cri.go:89] found id: ""
	I1115 09:59:08.021591  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:08.021640  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:08.025967  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:08.026027  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:08.053206  539051 cri.go:89] found id: ""
	I1115 09:59:08.053236  539051 logs.go:282] 0 containers: []
	W1115 09:59:08.053245  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:08.053252  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:08.053312  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:08.081560  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:08.081593  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:08.081599  539051 cri.go:89] found id: ""
	I1115 09:59:08.081609  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:08.081685  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:08.086335  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:08.090834  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:08.090917  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:08.120514  539051 cri.go:89] found id: ""
	I1115 09:59:08.120546  539051 logs.go:282] 0 containers: []
	W1115 09:59:08.120556  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:08.120566  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:08.120642  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:08.154644  539051 cri.go:89] found id: ""
	I1115 09:59:08.154671  539051 logs.go:282] 0 containers: []
	W1115 09:59:08.154681  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:08.154704  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:08.154719  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:08.184135  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:08.184166  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:08.219072  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:08.219103  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:08.279466  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:08.279497  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:08.279516  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:08.335307  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:08.335352  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:08.364915  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:08.364949  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:08.416132  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:08.416183  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:08.513500  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:08.513538  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:08.531227  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:08.531256  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:07.789539  589862 addons.go:515] duration metric: took 504.952942ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 09:59:08.066524  589862 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-559401" context rescaled to 1 replicas
	W1115 09:59:09.566911  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
	I1115 09:59:11.067035  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:11.067548  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:11.067618  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:11.067681  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:11.095892  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:11.095920  539051 cri.go:89] found id: ""
	I1115 09:59:11.095930  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:11.095982  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.100256  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:11.100325  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:11.132951  539051 cri.go:89] found id: ""
	I1115 09:59:11.132988  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.133000  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:11.133009  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:11.133075  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:11.162608  539051 cri.go:89] found id: ""
	I1115 09:59:11.162631  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.162639  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:11.162646  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:11.162692  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:11.191185  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:11.191210  539051 cri.go:89] found id: ""
	I1115 09:59:11.191220  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:11.191283  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.195540  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:11.195604  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:11.223635  539051 cri.go:89] found id: ""
	I1115 09:59:11.223669  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.223681  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:11.223689  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:11.223761  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:11.252109  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:11.252136  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:11.252142  539051 cri.go:89] found id: ""
	I1115 09:59:11.252152  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:11.252213  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.256490  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.260452  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:11.260517  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:11.288347  539051 cri.go:89] found id: ""
	I1115 09:59:11.288370  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.288379  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:11.288386  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:11.288463  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:11.316840  539051 cri.go:89] found id: ""
	I1115 09:59:11.316872  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.316888  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:11.316909  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:11.316926  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:11.345259  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:11.345288  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:11.377138  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:11.377172  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:11.435249  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:11.435276  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:11.435299  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:11.462991  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:11.463017  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:11.511643  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:11.511680  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:11.598975  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:11.599013  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:11.615855  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:11.615885  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:11.654559  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:11.654597  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:14.205487  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:14.205938  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:14.205990  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:14.206036  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:14.233815  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:14.233843  539051 cri.go:89] found id: ""
	I1115 09:59:14.233854  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:14.233914  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:14.238686  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:14.238762  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:14.266849  539051 cri.go:89] found id: ""
	I1115 09:59:14.266874  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.266883  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:14.266895  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:14.266945  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:14.295140  539051 cri.go:89] found id: ""
	I1115 09:59:14.295173  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.295185  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:14.295193  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:14.295259  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:14.323355  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:14.323375  539051 cri.go:89] found id: ""
	I1115 09:59:14.323383  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:14.323450  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:14.327639  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:14.327704  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:14.355633  539051 cri.go:89] found id: ""
	I1115 09:59:14.355656  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.355664  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:14.355670  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:14.355716  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:14.385052  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:14.385072  539051 cri.go:89] found id: ""
	I1115 09:59:14.385080  539051 logs.go:282] 1 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe]
	I1115 09:59:14.385139  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:14.389214  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:14.389278  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:14.416441  539051 cri.go:89] found id: ""
	I1115 09:59:14.416474  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.416497  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:14.416506  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:14.416557  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:14.443820  539051 cri.go:89] found id: ""
	I1115 09:59:14.443848  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.443858  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:14.443868  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:14.443882  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:14.495030  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:14.495076  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:14.522454  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:14.522488  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:14.572308  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:14.572342  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:14.603675  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:14.603702  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1115 09:59:11.567034  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
	W1115 09:59:14.066799  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 15 09:59:04 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:04.198795225Z" level=info msg="Starting container: 57f5b907666da39eb9356c7b19ecbe7d1063c9d2cd2ffe49da5ba321b5ac1e5a" id=83d8ef5d-6a1b-4289-922e-1483080bcce4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:59:04 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:04.200898518Z" level=info msg="Started container" PID=2146 containerID=57f5b907666da39eb9356c7b19ecbe7d1063c9d2cd2ffe49da5ba321b5ac1e5a description=kube-system/coredns-5dd5756b68-j8hqh/coredns id=83d8ef5d-6a1b-4289-922e-1483080bcce4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=199d842b40616023e68cb59ef9a9f632822067e834fab2a753b5c2bd8f679dc1
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.767808374Z" level=info msg="Running pod sandbox: default/busybox/POD" id=347cefb6-1e9e-42f1-b3ae-470312b31098 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.767945826Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.774153126Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:133feb5906ceab711fd03f41178e10e496ca7aceac001a91857a97c26c5f6a03 UID:0f8f9c9d-462a-4efa-a9dc-07df32af16c9 NetNS:/var/run/netns/6fb35569-e436-4831-ad00-a90dbfd11563 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004d10a8}] Aliases:map[]}"
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.774191391Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.784514839Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:133feb5906ceab711fd03f41178e10e496ca7aceac001a91857a97c26c5f6a03 UID:0f8f9c9d-462a-4efa-a9dc-07df32af16c9 NetNS:/var/run/netns/6fb35569-e436-4831-ad00-a90dbfd11563 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004d10a8}] Aliases:map[]}"
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.784651503Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.785377774Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.786180013Z" level=info msg="Ran pod sandbox 133feb5906ceab711fd03f41178e10e496ca7aceac001a91857a97c26c5f6a03 with infra container: default/busybox/POD" id=347cefb6-1e9e-42f1-b3ae-470312b31098 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.787562515Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e807bada-3655-472b-a523-80a148054447 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.787700596Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e807bada-3655-472b-a523-80a148054447 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.787772548Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e807bada-3655-472b-a523-80a148054447 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.78843414Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b07c26e-fe9e-47a9-b0e5-5786e4f5a714 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:59:06 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:06.790041445Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.824285973Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9b07c26e-fe9e-47a9-b0e5-5786e4f5a714 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.825335325Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=97e7e14c-84a6-4aa1-8b3c-6c24bd6faa80 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.826833542Z" level=info msg="Creating container: default/busybox/busybox" id=a1be213c-9b14-4af1-9177-cc152da69e6b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.826975423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.830927728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.831372009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.873731431Z" level=info msg="Created container b5eb63bf7c3a5c69af811ccc130e484ee770ac6a91431392387c1a8d0f09c10a: default/busybox/busybox" id=a1be213c-9b14-4af1-9177-cc152da69e6b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.874416053Z" level=info msg="Starting container: b5eb63bf7c3a5c69af811ccc130e484ee770ac6a91431392387c1a8d0f09c10a" id=2ad4d305-2292-4b9a-8232-859f3041cd73 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:59:08 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:08.87664908Z" level=info msg="Started container" PID=2225 containerID=b5eb63bf7c3a5c69af811ccc130e484ee770ac6a91431392387c1a8d0f09c10a description=default/busybox/busybox id=2ad4d305-2292-4b9a-8232-859f3041cd73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=133feb5906ceab711fd03f41178e10e496ca7aceac001a91857a97c26c5f6a03
	Nov 15 09:59:16 old-k8s-version-335655 crio[772]: time="2025-11-15T09:59:16.548794705Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b5eb63bf7c3a5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   133feb5906cea       busybox                                          default
	57f5b907666da       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   199d842b40616       coredns-5dd5756b68-j8hqh                         kube-system
	30741e423b588       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   79ad6ac27e717       storage-provisioner                              kube-system
	2421a59c285f4       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   f8ffe36e2df10       kindnet-w52sl                                    kube-system
	d7cc1be637c1a       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   8271e15a9a6e3       kube-proxy-ndp6f                                 kube-system
	789a279e7270d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   bd24770543576       etcd-old-k8s-version-335655                      kube-system
	9cf66dfcebcab       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   1159debb0d2e3       kube-controller-manager-old-k8s-version-335655   kube-system
	786618cfbb0bb       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   c71d0989e6235       kube-scheduler-old-k8s-version-335655            kube-system
	f8891d4b1d534       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   54c12d2beca49       kube-apiserver-old-k8s-version-335655            kube-system
	
	
	==> coredns [57f5b907666da39eb9356c7b19ecbe7d1063c9d2cd2ffe49da5ba321b5ac1e5a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36365 - 63718 "HINFO IN 5424149564165931644.8384931393640021522. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020863045s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-335655
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-335655
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=old-k8s-version-335655
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_58_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:58:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-335655
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:59:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:59:07 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:59:07 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:59:07 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:59:07 +0000   Sat, 15 Nov 2025 09:59:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-335655
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                4f251d42-f2ea-4cb6-8ff2-c94beae7a0fe
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-j8hqh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-335655                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-w52sl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-335655             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-335655    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-ndp6f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-335655             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-335655 event: Registered Node old-k8s-version-335655 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-335655 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [789a279e7270dacd901528c1caf267de7344defad89e9af32a16377de98ec7a7] <==
	{"level":"info","ts":"2025-11-15T09:58:33.046407Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-15T09:58:33.046362Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-15T09:58:33.046461Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-15T09:58:33.135298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-15T09:58:33.135361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-15T09:58:33.135416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-15T09:58:33.135436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-15T09:58:33.135444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T09:58:33.135456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-15T09:58:33.135469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-15T09:58:33.136238Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-335655 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T09:58:33.13624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:58:33.136278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:58:33.136308Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T09:58:33.136476Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T09:58:33.136523Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T09:58:33.137009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T09:58:33.137147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T09:58:33.137187Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-15T09:58:33.137672Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T09:58:33.137687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T09:58:47.65989Z","caller":"traceutil/trace.go:171","msg":"trace[1826727589] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"158.104723ms","start":"2025-11-15T09:58:47.501756Z","end":"2025-11-15T09:58:47.659861Z","steps":["trace[1826727589] 'process raft request'  (duration: 97.867503ms)","trace[1826727589] 'compare'  (duration: 60.034725ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T09:58:47.660002Z","caller":"traceutil/trace.go:171","msg":"trace[1447527101] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"155.144664ms","start":"2025-11-15T09:58:47.50485Z","end":"2025-11-15T09:58:47.659995Z","steps":["trace[1447527101] 'process raft request'  (duration: 154.917795ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:58:47.912177Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.809649ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597074968595703 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" value_size:139 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T09:58:47.912297Z","caller":"traceutil/trace.go:171","msg":"trace[690231223] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"138.308ms","start":"2025-11-15T09:58:47.773973Z","end":"2025-11-15T09:58:47.912281Z","steps":["trace[690231223] 'compare'  (duration: 130.656295ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:59:18 up  1:41,  0 user,  load average: 2.33, 2.35, 1.62
	Linux old-k8s-version-335655 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2421a59c285f49eb0e84343f8234151ebdca9baaa450771fa60d4289994a0ff6] <==
	I1115 09:58:53.085978       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:58:53.086256       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 09:58:53.086421       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:58:53.086438       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:58:53.086460       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:58:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:58:53.287273       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:58:53.287319       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:58:53.287329       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:58:53.287497       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:58:53.684059       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:58:53.684107       1 metrics.go:72] Registering metrics
	I1115 09:58:53.684210       1 controller.go:711] "Syncing nftables rules"
	I1115 09:59:03.295566       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 09:59:03.295611       1 main.go:301] handling current node
	I1115 09:59:13.289509       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 09:59:13.289558       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f8891d4b1d534c93fba8369ced7892f5be103096b3e10535fdcebe0717e8fa05] <==
	I1115 09:58:34.346015       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1115 09:58:34.346020       1 aggregator.go:166] initial CRD sync complete...
	I1115 09:58:34.346029       1 autoregister_controller.go:141] Starting autoregister controller
	I1115 09:58:34.346034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 09:58:34.346041       1 cache.go:39] Caches are synced for autoregister controller
	E1115 09:58:34.346350       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1115 09:58:34.347430       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 09:58:34.367606       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 09:58:34.368625       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 09:58:34.550368       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:58:35.251072       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 09:58:35.255120       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 09:58:35.255139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:58:35.758051       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:58:35.802773       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:58:35.851763       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 09:58:35.857819       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 09:58:35.858893       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 09:58:35.863345       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:58:36.272294       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 09:58:37.327418       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 09:58:37.337355       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 09:58:37.348710       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1115 09:58:49.379163       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1115 09:58:49.581665       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9cf66dfcebcab80cdb3b78add06538de143caacdcc6c59adc4f9ec2b1a5e378a] <==
	I1115 09:58:49.277061       1 shared_informer.go:318] Caches are synced for attach detach
	I1115 09:58:49.314658       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1115 09:58:49.383451       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1115 09:58:49.592462       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ndp6f"
	I1115 09:58:49.595825       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w52sl"
	I1115 09:58:49.656713       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 09:58:49.725162       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 09:58:49.725196       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 09:58:50.136100       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-tvff5"
	I1115 09:58:50.144754       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j8hqh"
	I1115 09:58:50.154428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="770.951862ms"
	I1115 09:58:50.163579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.961385ms"
	I1115 09:58:50.163879       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.965µs"
	I1115 09:58:50.164943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.284µs"
	I1115 09:58:50.596287       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1115 09:58:50.610812       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-tvff5"
	I1115 09:58:50.621853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.971551ms"
	I1115 09:58:50.638219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.306213ms"
	I1115 09:58:50.638353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.027µs"
	I1115 09:59:03.845909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.711µs"
	I1115 09:59:03.862941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.128µs"
	I1115 09:59:04.095518       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1115 09:59:04.492008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.452µs"
	I1115 09:59:04.519081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.87133ms"
	I1115 09:59:04.519191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.396µs"
	
	
	==> kube-proxy [d7cc1be637c1a970bc92c70b42c19fd2d8c93be9567a9535eb628b1813b618b8] <==
	I1115 09:58:50.646850       1 server_others.go:69] "Using iptables proxy"
	I1115 09:58:50.659682       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 09:58:50.680601       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:58:50.683046       1 server_others.go:152] "Using iptables Proxier"
	I1115 09:58:50.683076       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 09:58:50.683082       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 09:58:50.683115       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 09:58:50.683483       1 server.go:846] "Version info" version="v1.28.0"
	I1115 09:58:50.683508       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:58:50.684088       1 config.go:97] "Starting endpoint slice config controller"
	I1115 09:58:50.684124       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 09:58:50.684161       1 config.go:188] "Starting service config controller"
	I1115 09:58:50.684168       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 09:58:50.684193       1 config.go:315] "Starting node config controller"
	I1115 09:58:50.684197       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 09:58:50.784303       1 shared_informer.go:318] Caches are synced for node config
	I1115 09:58:50.784336       1 shared_informer.go:318] Caches are synced for service config
	I1115 09:58:50.784307       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [786618cfbb0bbf3ca735adca219ec3ebff98d48c532dd288297312f3d3f5b215] <==
	E1115 09:58:34.298534       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1115 09:58:34.298531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1115 09:58:34.298566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1115 09:58:34.298576       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1115 09:58:34.298585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1115 09:58:34.298598       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 09:58:35.129730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1115 09:58:35.129773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1115 09:58:35.151654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1115 09:58:35.151692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1115 09:58:35.154198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1115 09:58:35.154227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1115 09:58:35.184872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1115 09:58:35.184908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1115 09:58:35.222245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1115 09:58:35.222286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1115 09:58:35.272885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1115 09:58:35.272925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1115 09:58:35.341965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1115 09:58:35.342157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1115 09:58:35.511739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1115 09:58:35.511781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1115 09:58:35.539543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1115 09:58:35.539657       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1115 09:58:35.894700       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: I1115 09:58:49.666609    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/44811fde-1c17-472e-9aa0-ffb839e2e4d2-cni-cfg\") pod \"kindnet-w52sl\" (UID: \"44811fde-1c17-472e-9aa0-ffb839e2e4d2\") " pod="kube-system/kindnet-w52sl"
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: I1115 09:58:49.666696    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/771705b2-6cee-4952-b8b8-c3a6a4d8a4c7-xtables-lock\") pod \"kube-proxy-ndp6f\" (UID: \"771705b2-6cee-4952-b8b8-c3a6a4d8a4c7\") " pod="kube-system/kube-proxy-ndp6f"
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: I1115 09:58:49.666748    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/771705b2-6cee-4952-b8b8-c3a6a4d8a4c7-lib-modules\") pod \"kube-proxy-ndp6f\" (UID: \"771705b2-6cee-4952-b8b8-c3a6a4d8a4c7\") " pod="kube-system/kube-proxy-ndp6f"
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: I1115 09:58:49.666789    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44811fde-1c17-472e-9aa0-ffb839e2e4d2-xtables-lock\") pod \"kindnet-w52sl\" (UID: \"44811fde-1c17-472e-9aa0-ffb839e2e4d2\") " pod="kube-system/kindnet-w52sl"
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: I1115 09:58:49.666817    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44811fde-1c17-472e-9aa0-ffb839e2e4d2-lib-modules\") pod \"kindnet-w52sl\" (UID: \"44811fde-1c17-472e-9aa0-ffb839e2e4d2\") " pod="kube-system/kindnet-w52sl"
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: E1115 09:58:49.776634    1395 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: E1115 09:58:49.776681    1395 projected.go:198] Error preparing data for projected volume kube-api-access-545fl for pod kube-system/kindnet-w52sl: configmap "kube-root-ca.crt" not found
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: E1115 09:58:49.776770    1395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44811fde-1c17-472e-9aa0-ffb839e2e4d2-kube-api-access-545fl podName:44811fde-1c17-472e-9aa0-ffb839e2e4d2 nodeName:}" failed. No retries permitted until 2025-11-15 09:58:50.276740513 +0000 UTC m=+12.975318487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-545fl" (UniqueName: "kubernetes.io/projected/44811fde-1c17-472e-9aa0-ffb839e2e4d2-kube-api-access-545fl") pod "kindnet-w52sl" (UID: "44811fde-1c17-472e-9aa0-ffb839e2e4d2") : configmap "kube-root-ca.crt" not found
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: E1115 09:58:49.777750    1395 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: E1115 09:58:49.777782    1395 projected.go:198] Error preparing data for projected volume kube-api-access-8xbb4 for pod kube-system/kube-proxy-ndp6f: configmap "kube-root-ca.crt" not found
	Nov 15 09:58:49 old-k8s-version-335655 kubelet[1395]: E1115 09:58:49.777842    1395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/771705b2-6cee-4952-b8b8-c3a6a4d8a4c7-kube-api-access-8xbb4 podName:771705b2-6cee-4952-b8b8-c3a6a4d8a4c7 nodeName:}" failed. No retries permitted until 2025-11-15 09:58:50.277823238 +0000 UTC m=+12.976401214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8xbb4" (UniqueName: "kubernetes.io/projected/771705b2-6cee-4952-b8b8-c3a6a4d8a4c7-kube-api-access-8xbb4") pod "kube-proxy-ndp6f" (UID: "771705b2-6cee-4952-b8b8-c3a6a4d8a4c7") : configmap "kube-root-ca.crt" not found
	Nov 15 09:58:53 old-k8s-version-335655 kubelet[1395]: I1115 09:58:53.468538    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ndp6f" podStartSLOduration=4.468470981 podCreationTimestamp="2025-11-15 09:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:58:51.463430762 +0000 UTC m=+14.162008738" watchObservedRunningTime="2025-11-15 09:58:53.468470981 +0000 UTC m=+16.167048960"
	Nov 15 09:58:53 old-k8s-version-335655 kubelet[1395]: I1115 09:58:53.468695    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-w52sl" podStartSLOduration=2.1372696429999998 podCreationTimestamp="2025-11-15 09:58:49 +0000 UTC" firstStartedPulling="2025-11-15 09:58:50.518865802 +0000 UTC m=+13.217443772" lastFinishedPulling="2025-11-15 09:58:52.85027078 +0000 UTC m=+15.548848750" observedRunningTime="2025-11-15 09:58:53.468430315 +0000 UTC m=+16.167008295" watchObservedRunningTime="2025-11-15 09:58:53.468674621 +0000 UTC m=+16.167252597"
	Nov 15 09:59:03 old-k8s-version-335655 kubelet[1395]: I1115 09:59:03.820946    1395 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 15 09:59:03 old-k8s-version-335655 kubelet[1395]: I1115 09:59:03.844415    1395 topology_manager.go:215] "Topology Admit Handler" podUID="af2a330d-a530-455d-a428-c27df3d4ff47" podNamespace="kube-system" podName="storage-provisioner"
	Nov 15 09:59:03 old-k8s-version-335655 kubelet[1395]: I1115 09:59:03.845554    1395 topology_manager.go:215] "Topology Admit Handler" podUID="e2853043-8da1-44cd-b87b-51cecce5b801" podNamespace="kube-system" podName="coredns-5dd5756b68-j8hqh"
	Nov 15 09:59:03 old-k8s-version-335655 kubelet[1395]: I1115 09:59:03.868835    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqqxc\" (UniqueName: \"kubernetes.io/projected/e2853043-8da1-44cd-b87b-51cecce5b801-kube-api-access-kqqxc\") pod \"coredns-5dd5756b68-j8hqh\" (UID: \"e2853043-8da1-44cd-b87b-51cecce5b801\") " pod="kube-system/coredns-5dd5756b68-j8hqh"
	Nov 15 09:59:03 old-k8s-version-335655 kubelet[1395]: I1115 09:59:03.868883    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af2a330d-a530-455d-a428-c27df3d4ff47-tmp\") pod \"storage-provisioner\" (UID: \"af2a330d-a530-455d-a428-c27df3d4ff47\") " pod="kube-system/storage-provisioner"
	Nov 15 09:59:03 old-k8s-version-335655 kubelet[1395]: I1115 09:59:03.868905    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28brp\" (UniqueName: \"kubernetes.io/projected/af2a330d-a530-455d-a428-c27df3d4ff47-kube-api-access-28brp\") pod \"storage-provisioner\" (UID: \"af2a330d-a530-455d-a428-c27df3d4ff47\") " pod="kube-system/storage-provisioner"
	Nov 15 09:59:03 old-k8s-version-335655 kubelet[1395]: I1115 09:59:03.868949    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2853043-8da1-44cd-b87b-51cecce5b801-config-volume\") pod \"coredns-5dd5756b68-j8hqh\" (UID: \"e2853043-8da1-44cd-b87b-51cecce5b801\") " pod="kube-system/coredns-5dd5756b68-j8hqh"
	Nov 15 09:59:04 old-k8s-version-335655 kubelet[1395]: I1115 09:59:04.492034    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-j8hqh" podStartSLOduration=14.491982709 podCreationTimestamp="2025-11-15 09:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:04.491828275 +0000 UTC m=+27.190406253" watchObservedRunningTime="2025-11-15 09:59:04.491982709 +0000 UTC m=+27.190560687"
	Nov 15 09:59:04 old-k8s-version-335655 kubelet[1395]: I1115 09:59:04.502320    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.502268545 podCreationTimestamp="2025-11-15 09:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:04.501618345 +0000 UTC m=+27.200196324" watchObservedRunningTime="2025-11-15 09:59:04.502268545 +0000 UTC m=+27.200846524"
	Nov 15 09:59:06 old-k8s-version-335655 kubelet[1395]: I1115 09:59:06.465274    1395 topology_manager.go:215] "Topology Admit Handler" podUID="0f8f9c9d-462a-4efa-a9dc-07df32af16c9" podNamespace="default" podName="busybox"
	Nov 15 09:59:06 old-k8s-version-335655 kubelet[1395]: I1115 09:59:06.485792    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hzsk\" (UniqueName: \"kubernetes.io/projected/0f8f9c9d-462a-4efa-a9dc-07df32af16c9-kube-api-access-6hzsk\") pod \"busybox\" (UID: \"0f8f9c9d-462a-4efa-a9dc-07df32af16c9\") " pod="default/busybox"
	Nov 15 09:59:09 old-k8s-version-335655 kubelet[1395]: I1115 09:59:09.507483    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.470686611 podCreationTimestamp="2025-11-15 09:59:06 +0000 UTC" firstStartedPulling="2025-11-15 09:59:06.788024971 +0000 UTC m=+29.486602931" lastFinishedPulling="2025-11-15 09:59:08.824704587 +0000 UTC m=+31.523282547" observedRunningTime="2025-11-15 09:59:09.507274065 +0000 UTC m=+32.205852044" watchObservedRunningTime="2025-11-15 09:59:09.507366227 +0000 UTC m=+32.205944206"
	
	
	==> storage-provisioner [30741e423b588cbb5539d0e04d6ae2337df32aeae9a5848f6da75c2129d1b01a] <==
	I1115 09:59:04.207075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 09:59:04.216583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 09:59:04.216627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 09:59:04.225369       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 09:59:04.225482       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ccfd94-ee2b-478f-91d9-d71b353df891", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-335655_3aa4f3fa-f574-40db-9e6d-fa30030c2d71 became leader
	I1115 09:59:04.225549       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-335655_3aa4f3fa-f574-40db-9e6d-fa30030c2d71!
	I1115 09:59:04.326033       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-335655_3aa4f3fa-f574-40db-9e6d-fa30030c2d71!
	

                                                
                                                
-- /stdout --
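Note on the kubelet errors in the log above: the repeated 'configmap "kube-root-ca.crt" not found' failures for the kube-api-access-* projected volumes look alarming in a failure post-mortem, but they are a common startup race. That configmap is normally published into every namespace by kube-controller-manager shortly after the control plane comes up, so the kubelet simply backs off (the 500ms durationBeforeRetry in the log) and the mounts succeed on a later attempt, which is why kube-proxy-ndp6f and kindnet-w52sl still report successful starts a few seconds later. A quick way to confirm the configmap exists on this profile (illustrative command; the names are taken from the log):

    kubectl --context old-k8s-version-335655 -n kube-system get configmap kube-root-ca.crt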
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335655 -n old-k8s-version-335655
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-335655 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.15231ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:59:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
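For context on the exit status 11 above: the stderr shows minikube's MK_ADDON_ENABLE_PAUSED guard, which refuses to enable an addon while containers appear paused. On this crio profile the paused check shells out to "sudo runc list -f json", and that command itself fails because /run/runc does not exist, so the check fails before it can report anything. The failing check can be reproduced by hand against the same profile (illustrative; the runc invocation is copied from the stderr above):

    out/minikube-linux-amd64 ssh -p no-preload-559401 "sudo ls /run/runc"
    out/minikube-linux-amd64 ssh -p no-preload-559401 "sudo runc list -f json"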
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-559401 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-559401 describe deploy/metrics-server -n kube-system: exit status 1 (59.371401ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-559401 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
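The image assertion right above never gets anything to inspect because the enable step already failed, so the metrics-server deployment was never created (hence the NotFound from kubectl). When the addon does deploy, the --images/--registries override from the enable command can be verified directly, for example (illustrative):

    kubectl --context no-preload-559401 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

which for this test is expected to print an image containing fake.domain/registry.k8s.io/echoserver:1.4.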
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-559401
helpers_test.go:243: (dbg) docker inspect no-preload-559401:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e",
	        "Created": "2025-11-15T09:58:30.798243596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 590318,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:58:30.835729994Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/hostname",
	        "HostsPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/hosts",
	        "LogPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e-json.log",
	        "Name": "/no-preload-559401",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-559401:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-559401",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e",
	                "LowerDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-559401",
	                "Source": "/var/lib/docker/volumes/no-preload-559401/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-559401",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-559401",
	                "name.minikube.sigs.k8s.io": "no-preload-559401",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0f8538fd61d45b9c12b9881c6323f24d762ad58ecc78bbc480e29c6999c87b73",
	            "SandboxKey": "/var/run/docker/netns/0f8538fd61d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-559401": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9778bfb33840535be1dad946c45c61cf82a33a723dc88bd05e11d71cf2fc0a9f",
	                    "EndpointID": "36ce0a78bf4b2767c3c659727cfae9a8ab7573721a565ba9702f36342ba49701",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "92:6f:50:54:a8:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-559401",
	                        "96bf94e265be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
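The docker inspect output is mostly useful here for the published ports: the apiserver port 8443/tcp inside the kicbase container is bound to 127.0.0.1:33437 on the host, which is what the kubeconfig for this profile should point at, and 22/tcp at 127.0.0.1:33434 is what minikube ssh uses. The mapping can be read without parsing the JSON (illustrative commands):

    docker port no-preload-559401 8443/tcp
    kubectl --context no-preload-559401 cluster-info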
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-559401 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-559401 logs -n 25: (1.08261104s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p pause-717282 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-717282              │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ pause   │ -p pause-717282 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-717282              │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ delete  │ -p pause-717282                                                                                                                                                                                                                               │ pause-717282              │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ delete  │ -p force-systemd-env-450177                                                                                                                                                                                                                   │ force-systemd-env-450177  │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p force-systemd-flag-896620 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p cert-expiration-341243 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-341243    │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ start   │ -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:57 UTC │
	│ ssh     │ -p NoKubernetes-941483 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │                     │
	│ ssh     │ force-systemd-flag-896620 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:57 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p force-systemd-flag-896620                                                                                                                                                                                                                  │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p cert-options-759344 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ stop    │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p NoKubernetes-941483 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p NoKubernetes-941483 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │                     │
	│ delete  │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ ssh     │ cert-options-759344 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p cert-options-759344 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p cert-options-759344                                                                                                                                                                                                                        │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p old-k8s-version-335655 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:58:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:58:29.874516  589862 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:58:29.874820  589862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:29.874832  589862 out.go:374] Setting ErrFile to fd 2...
	I1115 09:58:29.874838  589862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:29.875092  589862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:58:29.875635  589862 out.go:368] Setting JSON to false
	I1115 09:58:29.876824  589862 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6051,"bootTime":1763194659,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:58:29.876941  589862 start.go:143] virtualization: kvm guest
	I1115 09:58:29.879225  589862 out.go:179] * [no-preload-559401] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:58:29.880796  589862 notify.go:221] Checking for updates...
	I1115 09:58:29.880848  589862 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:58:29.882225  589862 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:58:29.883821  589862 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:58:29.885862  589862 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:58:29.887184  589862 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:58:29.889102  589862 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:58:29.890994  589862 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:58:29.891132  589862 config.go:182] Loaded profile config "kubernetes-upgrade-405833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:58:29.891265  589862 config.go:182] Loaded profile config "old-k8s-version-335655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 09:58:29.891417  589862 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:58:29.917974  589862 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:58:29.918150  589862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:58:29.984949  589862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 09:58:29.974075987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:58:29.985133  589862 docker.go:319] overlay module found
	I1115 09:58:29.987254  589862 out.go:179] * Using the docker driver based on user configuration
	I1115 09:58:29.988613  589862 start.go:309] selected driver: docker
	I1115 09:58:29.988636  589862 start.go:930] validating driver "docker" against <nil>
	I1115 09:58:29.988651  589862 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:58:29.989314  589862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:58:30.056142  589862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 09:58:30.044702878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:58:30.056331  589862 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:58:30.056639  589862 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:58:30.058568  589862 out.go:179] * Using Docker driver with root privileges
	I1115 09:58:30.059840  589862 cni.go:84] Creating CNI manager for ""
	I1115 09:58:30.059920  589862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:58:30.059939  589862 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:58:30.060019  589862 start.go:353] cluster config:
	{Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:58:30.061582  589862 out.go:179] * Starting "no-preload-559401" primary control-plane node in "no-preload-559401" cluster
	I1115 09:58:30.062897  589862 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:58:30.064280  589862 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:58:30.065517  589862 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:58:30.065605  589862 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:58:30.065633  589862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/config.json ...
	I1115 09:58:30.065669  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/config.json: {Name:mkfae10aca1bc64f8ae312397b6f0f9d7f37cf88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:30.065861  589862 cache.go:107] acquiring lock: {Name:mk5f28db5350cb83d4ee10bd319ac89dc2575176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065863  589862 cache.go:107] acquiring lock: {Name:mk20541f119eb4401d674cb4e354d83b40cb36ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065911  589862 cache.go:107] acquiring lock: {Name:mk8a811e12b56d44de920eef87a9a4aec36ca449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065914  589862 cache.go:107] acquiring lock: {Name:mk0dbec31b80757040ed2efbb15c656d1127a225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065958  589862 cache.go:115] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1115 09:58:30.065968  589862 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.702µs
	I1115 09:58:30.065978  589862 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1115 09:58:30.065872  589862 cache.go:107] acquiring lock: {Name:mk54eb1701531b2aef5f1854448ea61e0b50dc7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065960  589862 cache.go:107] acquiring lock: {Name:mk3ddfe2b5843c63ea691168ffaaf34627ed6f51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065988  589862 cache.go:107] acquiring lock: {Name:mka82434b9fd38bdfc8ba016f803ffb7c71c9f8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.065996  589862 cache.go:107] acquiring lock: {Name:mkffdd5e68593188f2779fed2aafa94b93d50fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.066040  589862 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:30.066054  589862 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 09:58:30.066094  589862 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:30.066184  589862 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:30.066192  589862 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:30.066261  589862 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:30.066326  589862 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:30.067602  589862 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:30.067670  589862 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 09:58:30.067686  589862 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:30.067670  589862 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:30.067695  589862 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:30.067610  589862 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:30.067739  589862 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:30.091045  589862 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 09:58:30.091070  589862 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 09:58:30.091086  589862 cache.go:243] Successfully downloaded all kic artifacts
	I1115 09:58:30.091112  589862 start.go:360] acquireMachinesLock for no-preload-559401: {Name:mk95ac24bdde539f9c4d5f16eaa9bc055d55114d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 09:58:30.091226  589862 start.go:364] duration metric: took 88.763µs to acquireMachinesLock for "no-preload-559401"
	I1115 09:58:30.091258  589862 start.go:93] Provisioning new machine with config: &{Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:58:30.091348  589862 start.go:125] createHost starting for "" (driver="docker")
	I1115 09:58:28.127073  585980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt ...
	I1115 09:58:28.127104  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: {Name:mk1dc0830bf8ce637f791a39fc95fd42778d3198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.127283  585980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.key ...
	I1115 09:58:28.127295  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.key: {Name:mkf292a6df394d42f7d220fab6b3746567ae37f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.127381  585980 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb
	I1115 09:58:28.127417  585980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1115 09:58:28.185179  585980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb ...
	I1115 09:58:28.185210  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb: {Name:mk406e350629ae2fcd80883d9376b7d11bea8e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.185380  585980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb ...
	I1115 09:58:28.185420  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb: {Name:mkbf51d04e1156dc6394f165e55405a0439bcd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.185530  585980 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt.b843a3bb -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt
	I1115 09:58:28.185641  585980 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key.b843a3bb -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key
	I1115 09:58:28.185736  585980 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key
	I1115 09:58:28.185761  585980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt with IP's: []
	I1115 09:58:28.377041  585980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt ...
	I1115 09:58:28.377078  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt: {Name:mk232926b85d201a97a0d79ea38308091e816d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.377277  585980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key ...
	I1115 09:58:28.377297  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key: {Name:mkc65c93a414b464314b39175815b9bf5583609b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:28.377526  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:58:28.377587  585980 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:58:28.377601  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:58:28.377642  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:58:28.377680  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:58:28.377720  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:58:28.377781  585980 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:58:28.378360  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:58:28.397666  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:58:28.415298  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:58:28.433731  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:58:28.451682  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 09:58:28.469885  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 09:58:28.487862  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:58:28.505444  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:58:28.524769  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:58:28.544041  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:58:28.562599  585980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:58:28.579861  585980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:58:28.593228  585980 ssh_runner.go:195] Run: openssl version
	I1115 09:58:28.599738  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:58:28.608404  585980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:58:28.612119  585980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:58:28.612172  585980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:58:28.647736  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:58:28.657115  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:58:28.666082  585980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:28.670350  585980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:28.670449  585980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:28.706327  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:58:28.715938  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:58:28.724867  585980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:58:28.728703  585980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:58:28.728761  585980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:58:28.763128  585980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 09:58:28.772171  585980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:58:28.775805  585980 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:58:28.775858  585980 kubeadm.go:401] StartCluster: {Name:old-k8s-version-335655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-335655 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:58:28.775949  585980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:58:28.776014  585980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:58:28.809321  585980 cri.go:89] found id: ""
	I1115 09:58:28.809409  585980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:58:28.820377  585980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:58:28.831584  585980 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:58:28.831648  585980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:58:28.842237  585980 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:58:28.842257  585980 kubeadm.go:158] found existing configuration files:
	
	I1115 09:58:28.842304  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:58:28.852331  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:58:28.852413  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:58:28.861894  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:58:28.871092  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:58:28.871159  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:58:28.879747  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:58:28.889243  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:58:28.889307  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:58:28.897561  585980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:58:28.907063  585980 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:58:28.907127  585980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 09:58:28.918024  585980 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:58:29.012542  585980 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:58:29.090187  585980 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 09:58:31.984459  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:31.984977  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:58:31.985038  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:31.985116  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:32.014564  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:32.014603  539051 cri.go:89] found id: ""
	I1115 09:58:32.014616  539051 logs.go:282] 1 containers: [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:32.014682  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:32.018959  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:32.019043  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:32.048192  539051 cri.go:89] found id: ""
	I1115 09:58:32.048221  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.048233  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:32.048242  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:32.048297  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:32.079490  539051 cri.go:89] found id: ""
	I1115 09:58:32.079514  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.079522  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:32.079530  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:32.079585  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:32.111329  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:32.111355  539051 cri.go:89] found id: ""
	I1115 09:58:32.111366  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:32.111453  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:32.118840  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:32.118918  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:32.155586  539051 cri.go:89] found id: ""
	I1115 09:58:32.155616  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.155626  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:32.155634  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:32.155697  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:32.186730  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:32.186760  539051 cri.go:89] found id: ""
	I1115 09:58:32.186770  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:32.186837  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:32.191286  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:32.191350  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:32.218750  539051 cri.go:89] found id: ""
	I1115 09:58:32.218780  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.218791  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:32.218800  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:32.218871  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:32.247632  539051 cri.go:89] found id: ""
	I1115 09:58:32.247659  539051 logs.go:282] 0 containers: []
	W1115 09:58:32.247668  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:32.247681  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:32.247695  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:32.292636  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:32.292679  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:58:32.325839  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:32.325868  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:32.408867  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:32.408905  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:32.426739  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:32.426774  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:58:32.496095  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:58:32.496113  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:32.496126  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:32.527334  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:32.527365  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:32.579145  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:32.579201  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:30.093531  589862 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 09:58:30.093811  589862 start.go:159] libmachine.API.Create for "no-preload-559401" (driver="docker")
	I1115 09:58:30.093852  589862 client.go:173] LocalClient.Create starting
	I1115 09:58:30.093933  589862 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 09:58:30.093982  589862 main.go:143] libmachine: Decoding PEM data...
	I1115 09:58:30.094003  589862 main.go:143] libmachine: Parsing certificate...
	I1115 09:58:30.094071  589862 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 09:58:30.094097  589862 main.go:143] libmachine: Decoding PEM data...
	I1115 09:58:30.094114  589862 main.go:143] libmachine: Parsing certificate...
	I1115 09:58:30.094582  589862 cli_runner.go:164] Run: docker network inspect no-preload-559401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 09:58:30.113798  589862 cli_runner.go:211] docker network inspect no-preload-559401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 09:58:30.113867  589862 network_create.go:284] running [docker network inspect no-preload-559401] to gather additional debugging logs...
	I1115 09:58:30.113885  589862 cli_runner.go:164] Run: docker network inspect no-preload-559401
	W1115 09:58:30.133288  589862 cli_runner.go:211] docker network inspect no-preload-559401 returned with exit code 1
	I1115 09:58:30.133326  589862 network_create.go:287] error running [docker network inspect no-preload-559401]: docker network inspect no-preload-559401: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-559401 not found
	I1115 09:58:30.133356  589862 network_create.go:289] output of [docker network inspect no-preload-559401]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-559401 not found
	
	** /stderr **
	I1115 09:58:30.133512  589862 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:58:30.154570  589862 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 09:58:30.155902  589862 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 09:58:30.156422  589862 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 09:58:30.156864  589862 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4664d9872852 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a2:5a:7a:5f:0d:bf} reservation:<nil>}
	I1115 09:58:30.157366  589862 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5f22abf6c460 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:3a:fa:c2:83:36:45} reservation:<nil>}
	I1115 09:58:30.157898  589862 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b93b691a24ad IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:12:3c:53:f1:ac:76} reservation:<nil>}
	I1115 09:58:30.158603  589862 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d8b620}
	I1115 09:58:30.158625  589862 network_create.go:124] attempt to create docker network no-preload-559401 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1115 09:58:30.158685  589862 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-559401 no-preload-559401
	I1115 09:58:30.212478  589862 network_create.go:108] docker network no-preload-559401 192.168.103.0/24 created
	I1115 09:58:30.212513  589862 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-559401" container
	I1115 09:58:30.212589  589862 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 09:58:30.231647  589862 cli_runner.go:164] Run: docker volume create no-preload-559401 --label name.minikube.sigs.k8s.io=no-preload-559401 --label created_by.minikube.sigs.k8s.io=true
	I1115 09:58:30.233919  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1115 09:58:30.242312  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 09:58:30.243685  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 09:58:30.253127  589862 oci.go:103] Successfully created a docker volume no-preload-559401
	I1115 09:58:30.253217  589862 cli_runner.go:164] Run: docker run --rm --name no-preload-559401-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-559401 --entrypoint /usr/bin/test -v no-preload-559401:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 09:58:30.258087  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 09:58:30.268771  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 09:58:30.278033  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 09:58:30.283432  589862 cache.go:162] opening:  /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1115 09:58:30.356888  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1115 09:58:30.356918  589862 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 291.008829ms
	I1115 09:58:30.356934  589862 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1115 09:58:30.705162  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1115 09:58:30.705196  589862 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 639.345096ms
	I1115 09:58:30.705212  589862 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1115 09:58:30.723157  589862 oci.go:107] Successfully prepared a docker volume no-preload-559401
	I1115 09:58:30.723196  589862 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1115 09:58:30.723272  589862 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 09:58:30.723299  589862 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 09:58:30.723343  589862 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 09:58:30.779896  589862 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-559401 --name no-preload-559401 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-559401 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-559401 --network no-preload-559401 --ip 192.168.103.2 --volume no-preload-559401:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 09:58:31.140454  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Running}}
	I1115 09:58:31.163830  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:58:31.184769  589862 cli_runner.go:164] Run: docker exec no-preload-559401 stat /var/lib/dpkg/alternatives/iptables
	I1115 09:58:31.238067  589862 oci.go:144] the created container "no-preload-559401" has a running status.
	I1115 09:58:31.238095  589862 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa...
	I1115 09:58:31.286215  589862 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 09:58:31.329772  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:58:31.353370  589862 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 09:58:31.353415  589862 kic_runner.go:114] Args: [docker exec --privileged no-preload-559401 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 09:58:31.404309  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:58:31.429636  589862 machine.go:94] provisionDockerMachine start ...
	I1115 09:58:31.429740  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:31.454560  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:31.454909  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:31.454934  589862 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 09:58:31.455868  589862 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37522->127.0.0.1:33434: read: connection reset by peer
	I1115 09:58:31.708803  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1115 09:58:31.708901  589862 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.642916341s
	I1115 09:58:31.708923  589862 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1115 09:58:31.773506  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1115 09:58:31.773547  589862 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.707693668s
	I1115 09:58:31.773571  589862 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1115 09:58:31.775573  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1115 09:58:31.775605  589862 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.70964456s
	I1115 09:58:31.775622  589862 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1115 09:58:31.815339  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1115 09:58:31.815369  589862 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.749457567s
	I1115 09:58:31.815384  589862 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1115 09:58:32.163494  589862 cache.go:157] /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1115 09:58:32.163531  589862 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.097603485s
	I1115 09:58:32.163547  589862 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1115 09:58:32.163567  589862 cache.go:87] Successfully saved all images to host disk.
	I1115 09:58:34.594835  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-559401
	
	I1115 09:58:34.594884  589862 ubuntu.go:182] provisioning hostname "no-preload-559401"
	I1115 09:58:34.594968  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:34.613706  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:34.613933  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:34.613948  589862 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-559401 && echo "no-preload-559401" | sudo tee /etc/hostname
	I1115 09:58:34.755990  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-559401
	
	I1115 09:58:34.756080  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:34.775532  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:34.775774  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:34.775792  589862 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-559401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-559401/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-559401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 09:58:34.906384  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 09:58:34.906444  589862 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 09:58:34.906476  589862 ubuntu.go:190] setting up certificates
	I1115 09:58:34.906500  589862 provision.go:84] configureAuth start
	I1115 09:58:34.906581  589862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-559401
	I1115 09:58:34.926843  589862 provision.go:143] copyHostCerts
	I1115 09:58:34.926916  589862 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 09:58:34.926932  589862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 09:58:34.927018  589862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 09:58:34.927134  589862 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 09:58:34.927146  589862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 09:58:34.927189  589862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 09:58:34.927267  589862 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 09:58:34.927277  589862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 09:58:34.927315  589862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 09:58:34.927404  589862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.no-preload-559401 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-559401]
	I1115 09:58:35.219274  589862 provision.go:177] copyRemoteCerts
	I1115 09:58:35.219345  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 09:58:35.219409  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.241686  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:35.345507  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 09:58:35.370231  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 09:58:35.394192  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 09:58:35.418598  589862 provision.go:87] duration metric: took 512.076108ms to configureAuth
	I1115 09:58:35.418731  589862 ubuntu.go:206] setting minikube options for container-runtime
	I1115 09:58:35.418944  589862 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:58:35.419062  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.443736  589862 main.go:143] libmachine: Using SSH client type: native
	I1115 09:58:35.444029  589862 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1115 09:58:35.444062  589862 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 09:58:35.720578  589862 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 09:58:35.720603  589862 machine.go:97] duration metric: took 4.290945095s to provisionDockerMachine
	I1115 09:58:35.720615  589862 client.go:176] duration metric: took 5.626752422s to LocalClient.Create
	I1115 09:58:35.720639  589862 start.go:167] duration metric: took 5.626830168s to libmachine.API.Create "no-preload-559401"
	I1115 09:58:35.720653  589862 start.go:293] postStartSetup for "no-preload-559401" (driver="docker")
	I1115 09:58:35.720665  589862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 09:58:35.720742  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 09:58:35.720798  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.743973  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:35.847187  589862 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 09:58:35.851307  589862 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 09:58:35.851341  589862 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 09:58:35.851355  589862 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 09:58:35.851432  589862 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 09:58:35.851531  589862 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 09:58:35.851662  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 09:58:35.861180  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:58:35.882717  589862 start.go:296] duration metric: took 162.047003ms for postStartSetup
	I1115 09:58:35.883027  589862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-559401
	I1115 09:58:35.902579  589862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/config.json ...
	I1115 09:58:35.902870  589862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:58:35.902915  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:35.921170  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:36.013132  589862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 09:58:36.018024  589862 start.go:128] duration metric: took 5.926656055s to createHost
	I1115 09:58:36.018051  589862 start.go:83] releasing machines lock for "no-preload-559401", held for 5.926812114s
	I1115 09:58:36.018127  589862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-559401
	I1115 09:58:36.036935  589862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 09:58:36.036996  589862 ssh_runner.go:195] Run: cat /version.json
	I1115 09:58:36.037045  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:36.037050  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:58:36.057132  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:36.057173  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:58:36.206632  589862 ssh_runner.go:195] Run: systemctl --version
	I1115 09:58:36.213577  589862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 09:58:36.248844  589862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 09:58:36.253674  589862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 09:58:36.253747  589862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 09:58:36.282619  589862 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 09:58:36.282648  589862 start.go:496] detecting cgroup driver to use...
	I1115 09:58:36.282686  589862 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 09:58:36.282756  589862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 09:58:36.302762  589862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 09:58:36.318948  589862 docker.go:218] disabling cri-docker service (if available) ...
	I1115 09:58:36.319021  589862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 09:58:36.339460  589862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 09:58:36.359517  589862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 09:58:36.445215  589862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 09:58:36.535533  589862 docker.go:234] disabling docker service ...
	I1115 09:58:36.535609  589862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 09:58:36.556104  589862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 09:58:36.569646  589862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 09:58:36.656598  589862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 09:58:36.738249  589862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 09:58:36.751511  589862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 09:58:36.766513  589862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 09:58:36.766587  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.776944  589862 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 09:58:36.777003  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.786248  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.795829  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.805386  589862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 09:58:36.813681  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.822725  589862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.837041  589862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 09:58:36.846317  589862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 09:58:36.854047  589862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 09:58:36.862245  589862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:58:36.949639  589862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 09:58:37.066568  589862 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 09:58:37.066643  589862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 09:58:37.071088  589862 start.go:564] Will wait 60s for crictl version
	I1115 09:58:37.071149  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.074966  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 09:58:37.102164  589862 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 09:58:37.102254  589862 ssh_runner.go:195] Run: crio --version
	I1115 09:58:37.135101  589862 ssh_runner.go:195] Run: crio --version
	I1115 09:58:37.168824  589862 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 09:58:37.522328  585980 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1115 09:58:37.522427  585980 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:58:37.522571  585980 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:58:37.522651  585980 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:58:37.522703  585980 kubeadm.go:319] OS: Linux
	I1115 09:58:37.522774  585980 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:58:37.522840  585980 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:58:37.522913  585980 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:58:37.522985  585980 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:58:37.523056  585980 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:58:37.523125  585980 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:58:37.523191  585980 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:58:37.523249  585980 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:58:37.523357  585980 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:58:37.523508  585980 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:58:37.523625  585980 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1115 09:58:37.523716  585980 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:58:37.529522  585980 out.go:252]   - Generating certificates and keys ...
	I1115 09:58:37.529673  585980 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:58:37.529789  585980 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:58:37.529897  585980 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:58:37.529982  585980 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:58:37.530065  585980 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:58:37.530144  585980 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:58:37.530223  585980 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:58:37.530403  585980 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-335655] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 09:58:37.530477  585980 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:58:37.530640  585980 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-335655] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 09:58:37.530744  585980 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:58:37.530838  585980 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:58:37.530904  585980 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:58:37.530979  585980 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:58:37.531046  585980 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:58:37.531119  585980 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:58:37.531208  585980 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:58:37.531289  585980 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:58:37.531446  585980 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:58:37.531531  585980 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:58:37.533125  585980 out.go:252]   - Booting up control plane ...
	I1115 09:58:37.533639  585980 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:58:37.533800  585980 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:58:37.533894  585980 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:58:37.534052  585980 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:58:37.534247  585980 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:58:37.534337  585980 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:58:37.534571  585980 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1115 09:58:37.535356  585980 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.002460 seconds
	I1115 09:58:37.535637  585980 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:58:37.535818  585980 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:58:37.535902  585980 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:58:37.536175  585980 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-335655 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:58:37.536257  585980 kubeadm.go:319] [bootstrap-token] Using token: olz1a2.naoibbsbc9ube8ph
	I1115 09:58:37.541994  585980 out.go:252]   - Configuring RBAC rules ...
	I1115 09:58:37.542143  585980 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:58:37.542254  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:58:37.542498  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:58:37.542676  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:58:37.542850  585980 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:58:37.542986  585980 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:58:37.543210  585980 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:58:37.543345  585980 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:58:37.543455  585980 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:58:37.543475  585980 kubeadm.go:319] 
	I1115 09:58:37.543575  585980 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:58:37.543587  585980 kubeadm.go:319] 
	I1115 09:58:37.543679  585980 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:58:37.543692  585980 kubeadm.go:319] 
	I1115 09:58:37.543723  585980 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:58:37.543799  585980 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:58:37.543871  585980 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:58:37.543880  585980 kubeadm.go:319] 
	I1115 09:58:37.543949  585980 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:58:37.543959  585980 kubeadm.go:319] 
	I1115 09:58:37.544017  585980 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:58:37.544026  585980 kubeadm.go:319] 
	I1115 09:58:37.544093  585980 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:58:37.544193  585980 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:58:37.544287  585980 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:58:37.544297  585980 kubeadm.go:319] 
	I1115 09:58:37.544420  585980 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:58:37.544527  585980 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:58:37.544541  585980 kubeadm.go:319] 
	I1115 09:58:37.544656  585980 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token olz1a2.naoibbsbc9ube8ph \
	I1115 09:58:37.544787  585980 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 09:58:37.544816  585980 kubeadm.go:319] 	--control-plane 
	I1115 09:58:37.544822  585980 kubeadm.go:319] 
	I1115 09:58:37.544931  585980 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:58:37.544938  585980 kubeadm.go:319] 
	I1115 09:58:37.545035  585980 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token olz1a2.naoibbsbc9ube8ph \
	I1115 09:58:37.545196  585980 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 09:58:37.545208  585980 cni.go:84] Creating CNI manager for ""
	I1115 09:58:37.545217  585980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:58:37.549811  585980 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 09:58:37.551294  585980 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:58:37.558028  585980 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1115 09:58:37.558051  585980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:58:37.578448  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 09:58:35.109015  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:35.109592  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
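The two lines above show one iteration of minikube's apiserver readiness poll: it requests /healthz on the control-plane address and records the connection-refused error while the apiserver restarts. Below is a minimal Go sketch of that kind of poll; the address, timeout, and retry interval are illustrative assumptions, not minikube's exact implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz queries the apiserver /healthz endpoint until it answers "ok"
// or the deadline expires. TLS verification is skipped because the poll only
// cares about reachability, not server identity.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if err := pollHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}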
	I1115 09:58:35.109649  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:35.109704  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:35.139619  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:35.139647  539051 cri.go:89] found id: ""
	I1115 09:58:35.139657  539051 logs.go:282] 1 containers: [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:35.139730  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:35.144017  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:35.144089  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:35.175873  539051 cri.go:89] found id: ""
	I1115 09:58:35.175901  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.175913  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:35.175922  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:35.175978  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:35.205507  539051 cri.go:89] found id: ""
	I1115 09:58:35.205534  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.205542  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:35.205548  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:35.205610  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:35.237232  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:35.237259  539051 cri.go:89] found id: ""
	I1115 09:58:35.237271  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:35.237342  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:35.241823  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:35.241896  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:35.276658  539051 cri.go:89] found id: ""
	I1115 09:58:35.276689  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.276700  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:35.276708  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:35.276775  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:35.309930  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:35.309956  539051 cri.go:89] found id: ""
	I1115 09:58:35.309965  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:35.310025  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:35.314615  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:35.314699  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:35.349847  539051 cri.go:89] found id: ""
	I1115 09:58:35.349878  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.349889  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:35.349902  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:35.349963  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:35.383056  539051 cri.go:89] found id: ""
	I1115 09:58:35.383084  539051 logs.go:282] 0 containers: []
	W1115 09:58:35.383095  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:35.383109  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:35.383128  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:35.425480  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:35.425577  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:35.487718  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:35.487762  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:35.521375  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:35.521418  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:35.577108  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:35.577154  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:58:35.613515  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:35.613554  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:35.720375  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:35.720427  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:35.743158  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:35.743198  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:58:35.813158  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
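When the healthz probe fails, the log-gathering pass above enumerates containers with crictl and tails the last 400 lines of each one it finds. A rough Go equivalent of that sequence follows; it assumes crictl is on the PATH and that the caller is allowed to invoke it via sudo.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the filter, mirroring the `crictl ps -a --quiet --name=...` calls above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		// Tail the last 400 lines of each matching container, as logs.go does.
		out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s\n", id, out)
	}
}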
	I1115 09:58:38.314463  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:38.314952  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:58:38.315016  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:38.315133  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:38.355363  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:38.355386  539051 cri.go:89] found id: ""
	I1115 09:58:38.355419  539051 logs.go:282] 1 containers: [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:38.355478  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:38.360324  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:38.360380  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:38.398537  539051 cri.go:89] found id: ""
	I1115 09:58:38.398563  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.398573  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:38.398581  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:38.398646  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:38.438525  539051 cri.go:89] found id: ""
	I1115 09:58:38.438564  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.438576  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:38.438584  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:38.438642  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:38.475177  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:38.475204  539051 cri.go:89] found id: ""
	I1115 09:58:38.475215  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:38.475282  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:38.480904  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:38.480989  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:38.518293  539051 cri.go:89] found id: ""
	I1115 09:58:38.518326  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.518336  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:38.518343  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:38.518412  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:38.552194  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:38.552217  539051 cri.go:89] found id: ""
	I1115 09:58:38.552226  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:38.552280  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:38.557164  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:38.557243  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:38.589885  539051 cri.go:89] found id: ""
	I1115 09:58:38.589913  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.589925  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:38.589934  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:38.590002  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:38.625434  539051 cri.go:89] found id: ""
	I1115 09:58:38.625466  539051 logs.go:282] 0 containers: []
	W1115 09:58:38.625478  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:38.625491  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:38.625504  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:38.686872  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:38.686910  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:58:38.722145  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:38.722182  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:38.853786  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:38.853821  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:38.874289  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:38.874329  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:58:38.951416  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:58:38.951444  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:38.951463  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:38.991144  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:38.991180  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:39.051244  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:39.051286  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:37.170195  589862 cli_runner.go:164] Run: docker network inspect no-preload-559401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 09:58:37.194922  589862 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 09:58:37.199610  589862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 09:58:37.210098  589862 kubeadm.go:884] updating cluster {Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 09:58:37.210240  589862 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:58:37.210289  589862 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 09:58:37.237265  589862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1115 09:58:37.237294  589862 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1115 09:58:37.237347  589862 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:37.237373  589862 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.237427  589862 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.237436  589862 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1115 09:58:37.237438  589862 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.237477  589862 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.237495  589862 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.237400  589862 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.238654  589862 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.238755  589862 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.238790  589862 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.238790  589862 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.238654  589862 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.238790  589862 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.238834  589862 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:37.238852  589862 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1115 09:58:37.368858  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.388030  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.390652  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.402893  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1115 09:58:37.416483  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.423499  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.427234  589862 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1115 09:58:37.427278  589862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.427328  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.445632  589862 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1115 09:58:37.445685  589862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.445739  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.449162  589862 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1115 09:58:37.449205  589862 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.449251  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.456823  589862 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1115 09:58:37.456873  589862 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1115 09:58:37.456925  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.462211  589862 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1115 09:58:37.462252  589862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.462296  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.467693  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.469716  589862 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1115 09:58:37.469751  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.469763  589862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.469776  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.469813  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.469846  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.469873  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.469848  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 09:58:37.520598  589862 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1115 09:58:37.520644  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.520649  589862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.520683  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:37.520925  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.520988  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 09:58:37.521005  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.521059  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.521156  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.559429  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.559478  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.562902  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1115 09:58:37.563009  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1115 09:58:37.563105  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1115 09:58:37.563211  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1115 09:58:37.567478  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1115 09:58:37.603335  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1115 09:58:37.603731  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.607984  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1115 09:58:37.608024  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1115 09:58:37.608097  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 09:58:37.608135  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1115 09:58:37.608164  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1115 09:58:37.608230  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1115 09:58:37.613074  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1115 09:58:37.613175  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1115 09:58:37.613376  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1115 09:58:37.613539  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 09:58:37.637537  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1115 09:58:37.637636  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 09:58:37.637659  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1115 09:58:37.637682  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1115 09:58:37.637708  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1115 09:58:37.637728  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1115 09:58:37.637748  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1115 09:58:37.637759  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1115 09:58:37.637776  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1115 09:58:37.637830  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1115 09:58:37.637849  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1115 09:58:37.637811  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1115 09:58:37.637881  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1115 09:58:37.645638  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1115 09:58:37.645672  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1115 09:58:37.686387  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1115 09:58:37.686509  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 09:58:37.708289  589862 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1115 09:58:37.708371  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1115 09:58:37.775568  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1115 09:58:37.775610  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1115 09:58:38.127056  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1115 09:58:38.127105  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 09:58:38.127168  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1115 09:58:38.588766  589862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:39.393213  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.266010739s)
	I1115 09:58:39.393247  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1115 09:58:39.393274  589862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1115 09:58:39.393312  589862 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1115 09:58:39.393346  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1115 09:58:39.393358  589862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:39.393436  589862 ssh_runner.go:195] Run: which crictl
	I1115 09:58:39.397614  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
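The 589862 lines above follow a simple cache protocol for every image: stat the tarball on the node, copy it over only when the stat fails, then load it into CRI-O with podman. A condensed Go sketch of that check-then-load step is shown below; the local cp stands in for the scp that minikube performs over its SSH runner, and the example paths are illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// ensureImageLoaded mirrors the existence check / transfer / load sequence
// from the log: if the tarball is not already present on the node, copy it
// there, then load it into the container runtime with podman.
func ensureImageLoaded(cachedTar, nodeTar string) error {
	if err := exec.Command("stat", "-c", "%s %y", nodeTar).Run(); err != nil {
		// Not present yet; in minikube this step is an scp over SSH.
		if err := exec.Command("cp", cachedTar, nodeTar).Run(); err != nil {
			return fmt.Errorf("transfer %s: %w", cachedTar, err)
		}
	}
	return exec.Command("sudo", "podman", "load", "-i", nodeTar).Run()
}

func main() {
	err := ensureImageLoaded(
		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	)
	if err != nil {
		fmt.Println(err)
	}
}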
	I1115 09:58:38.486205  585980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:58:38.486290  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:38.486305  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-335655 minikube.k8s.io/updated_at=2025_11_15T09_58_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=old-k8s-version-335655 minikube.k8s.io/primary=true
	I1115 09:58:38.569942  585980 ops.go:34] apiserver oom_adj: -16
	I1115 09:58:38.570037  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:39.070852  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:39.570810  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:40.070706  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:40.570151  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:41.071113  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:41.570266  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:42.070123  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:42.570368  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:41.587462  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:58:40.679243  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.285866204s)
	I1115 09:58:40.679279  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1115 09:58:40.679275  589862 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.281627059s)
	I1115 09:58:40.679311  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 09:58:40.679350  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:40.679370  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1115 09:58:40.706812  589862 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:42.076497  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.397096551s)
	I1115 09:58:42.076542  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1115 09:58:42.076567  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 09:58:42.076510  589862 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.369660524s)
	I1115 09:58:42.076663  589862 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1115 09:58:42.076617  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1115 09:58:42.076760  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1115 09:58:43.672868  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.596125693s)
	I1115 09:58:43.672901  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1115 09:58:43.672866  589862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.59607551s)
	I1115 09:58:43.672934  589862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 09:58:43.672977  589862 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1115 09:58:43.672986  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1115 09:58:43.673004  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1115 09:58:44.853738  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.180718097s)
	I1115 09:58:44.853771  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1115 09:58:44.853801  589862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1115 09:58:44.853841  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1115 09:58:43.070901  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:43.570489  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:44.070154  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:44.570975  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:45.070251  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:45.571098  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:46.070925  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:46.571003  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:47.070168  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:47.571032  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:46.589587  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1115 09:58:46.589659  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:58:46.589759  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:58:46.624264  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:58:46.624289  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:46.624295  539051 cri.go:89] found id: ""
	I1115 09:58:46.624305  539051 logs.go:282] 2 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:58:46.624364  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.629325  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.633650  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:58:46.633736  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:58:46.665182  539051 cri.go:89] found id: ""
	I1115 09:58:46.665205  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.665213  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:58:46.665221  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:58:46.665270  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:58:46.696039  539051 cri.go:89] found id: ""
	I1115 09:58:46.696066  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.696078  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:58:46.696087  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:58:46.696142  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:58:46.727651  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:46.727678  539051 cri.go:89] found id: ""
	I1115 09:58:46.727688  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:58:46.727747  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.732339  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:58:46.732425  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:58:46.761424  539051 cri.go:89] found id: ""
	I1115 09:58:46.761455  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.761467  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:58:46.761475  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:58:46.761540  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:58:46.790988  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:46.791014  539051 cri.go:89] found id: ""
	I1115 09:58:46.791025  539051 logs.go:282] 1 containers: [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:58:46.791081  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:58:46.795775  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:58:46.795838  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:58:46.828078  539051 cri.go:89] found id: ""
	I1115 09:58:46.828105  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.828115  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:58:46.828123  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:58:46.828188  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:58:46.858191  539051 cri.go:89] found id: ""
	I1115 09:58:46.858217  539051 logs.go:282] 0 containers: []
	W1115 09:58:46.858225  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:58:46.858240  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:58:46.858254  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:58:46.893709  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:58:46.893740  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:58:46.951755  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:58:46.951792  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:58:47.012185  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:58:47.012226  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:58:47.114132  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:58:47.114170  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:58:47.133687  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:58:47.133723  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1115 09:58:48.333733  589862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.479866558s)
	I1115 09:58:48.333763  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1115 09:58:48.333787  589862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1115 09:58:48.333840  589862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1115 09:58:48.887418  589862 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21895-355485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1115 09:58:48.887465  589862 cache_images.go:125] Successfully loaded all cached images
	I1115 09:58:48.887471  589862 cache_images.go:94] duration metric: took 11.650162064s to LoadCachedImages
	I1115 09:58:48.887486  589862 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 09:58:48.887599  589862 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-559401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
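The kubelet unit generated above uses the standard systemd override pattern: the empty ExecStart= clears the packaged command before the minikube-specific command line is set. Below is a small Go sketch of writing such a drop-in and reloading the service; the drop-in path and the condensed flag set are assumptions, with the flag values taken from the log.

package main

import (
	"os"
	"os/exec"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-559401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2

[Install]
`

func main() {
	// 10-kubeadm.conf is the conventional drop-in name; the exact path is an assumption.
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(kubeletDropIn), 0o644); err != nil {
		panic(err)
	}
	// Pick up the new unit file and restart the kubelet with it.
	exec.Command("systemctl", "daemon-reload").Run()
	exec.Command("systemctl", "restart", "kubelet").Run()
}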
	I1115 09:58:48.887681  589862 ssh_runner.go:195] Run: crio config
	I1115 09:58:48.935652  589862 cni.go:84] Creating CNI manager for ""
	I1115 09:58:48.935679  589862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:58:48.935698  589862 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 09:58:48.935727  589862 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-559401 NodeName:no-preload-559401 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 09:58:48.935955  589862 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-559401"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
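
	Aside (not harness output): the block above is the multi-document kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch for sanity-checking such a rendered file by listing the apiVersion/kind of each YAML document; the file path is an assumption and gopkg.in/yaml.v3 is assumed to be available.

```go
// Sketch only: enumerate the documents in a rendered kubeadm config.
// Expected kinds here: InitConfiguration, ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
```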
	
	I1115 09:58:48.936036  589862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 09:58:48.944737  589862 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1115 09:58:48.944809  589862 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1115 09:58:48.953950  589862 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1115 09:58:48.954034  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1115 09:58:48.954060  589862 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1115 09:58:48.954089  589862 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1115 09:58:48.958596  589862 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1115 09:58:48.958631  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1115 09:58:48.070773  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:48.570466  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:49.070716  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:49.570991  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:50.071061  585980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:58:50.178640  585980 kubeadm.go:1114] duration metric: took 11.692414753s to wait for elevateKubeSystemPrivileges
	I1115 09:58:50.178690  585980 kubeadm.go:403] duration metric: took 21.402833585s to StartCluster
	I1115 09:58:50.178714  585980 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.178808  585980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:58:50.180095  585980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.180339  585980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:58:50.180357  585980 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:58:50.180477  585980 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:58:50.180571  585980 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-335655"
	I1115 09:58:50.180592  585980 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-335655"
	I1115 09:58:50.180603  585980 config.go:182] Loaded profile config "old-k8s-version-335655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 09:58:50.180632  585980 host.go:66] Checking if "old-k8s-version-335655" exists ...
	I1115 09:58:50.180655  585980 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-335655"
	I1115 09:58:50.180672  585980 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-335655"
	I1115 09:58:50.181063  585980 cli_runner.go:164] Run: docker container inspect old-k8s-version-335655 --format={{.State.Status}}
	I1115 09:58:50.181276  585980 cli_runner.go:164] Run: docker container inspect old-k8s-version-335655 --format={{.State.Status}}
	I1115 09:58:50.184320  585980 out.go:179] * Verifying Kubernetes components...
	I1115 09:58:50.185582  585980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:58:50.208776  585980 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:58:50.209642  585980 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-335655"
	I1115 09:58:50.209728  585980 host.go:66] Checking if "old-k8s-version-335655" exists ...
	I1115 09:58:50.210281  585980 cli_runner.go:164] Run: docker container inspect old-k8s-version-335655 --format={{.State.Status}}
	I1115 09:58:50.211695  585980 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:58:50.211717  585980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:58:50.211770  585980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-335655
	I1115 09:58:50.243519  585980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/old-k8s-version-335655/id_rsa Username:docker}
	I1115 09:58:50.247576  585980 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:58:50.247608  585980 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:58:50.247676  585980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-335655
	I1115 09:58:50.275842  585980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/old-k8s-version-335655/id_rsa Username:docker}
	I1115 09:58:50.295150  585980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:58:50.350270  585980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:58:50.362178  585980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:58:50.392335  585980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:58:50.559073  585980 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1115 09:58:50.560112  585980 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-335655" to be "Ready" ...
	I1115 09:58:50.836571  585980 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 09:58:50.837873  585980 addons.go:515] duration metric: took 657.393104ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 09:58:51.063769  585980 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-335655" context rescaled to 1 replicas
	W1115 09:58:52.563949  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	I1115 09:58:49.900772  589862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:58:49.915081  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1115 09:58:49.920412  589862 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1115 09:58:49.920459  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1115 09:58:50.020568  589862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1115 09:58:50.027737  589862 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1115 09:58:50.027790  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
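
	Aside (not harness output): the kubelet/kubeadm downloads above use `?checksum=file:...sha256` URLs, i.e. the binary is verified against a published SHA-256 digest before it is copied to the node. A minimal Go sketch of that verification; the local file names are assumptions for the example.

```go
// Sketch only: verify a downloaded binary against its .sha256 sidecar file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func main() {
	binPath := "kubeadm"        // downloaded binary (assumed name)
	sumPath := "kubeadm.sha256" // sidecar file holding the expected hex digest

	f, err := os.Open(binPath)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	raw, err := os.ReadFile(sumPath)
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(string(raw)) // dl.k8s.io .sha256 files carry just the digest
	if len(fields) == 0 {
		log.Fatal("empty checksum file")
	}
	if got != fields[0] {
		log.Fatalf("checksum mismatch: got %s want %s", got, fields[0])
	}
	fmt.Println("checksum OK")
}
```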
	I1115 09:58:50.308923  589862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 09:58:50.320504  589862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 09:58:50.336119  589862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 09:58:50.354358  589862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1115 09:58:50.371220  589862 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 09:58:50.377002  589862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
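
	Aside (not harness output): the bash one-liner above makes the /etc/hosts entry for control-plane.minikube.internal idempotent: any existing entry is stripped before the current IP is appended. A minimal Go sketch of the same idea; it writes to a scratch file rather than /etc/hosts (which would need root), and the IP/hostname are the ones from this run.

```go
// Sketch only: drop a stale host entry and append the current mapping.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const ip = "192.168.103.2"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the old entry, like `grep -v` in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)

	// Illustrative destination; the harness copies its temp file over /etc/hosts with sudo.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
```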
	I1115 09:58:50.390186  589862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:58:50.504769  589862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:58:50.535882  589862 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401 for IP: 192.168.103.2
	I1115 09:58:50.535903  589862 certs.go:195] generating shared ca certs ...
	I1115 09:58:50.535924  589862 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.536096  589862 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 09:58:50.536319  589862 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 09:58:50.536379  589862 certs.go:257] generating profile certs ...
	I1115 09:58:50.536551  589862 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.key
	I1115 09:58:50.536611  589862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt with IP's: []
	I1115 09:58:50.654774  589862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt ...
	I1115 09:58:50.654816  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt: {Name:mkf7eb6dd7672898489471e2954de98923605286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.655021  589862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.key ...
	I1115 09:58:50.655040  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.key: {Name:mke4b476571efd801c87de00dd4f3d2a6f4ddbbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.655161  589862 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b
	I1115 09:58:50.655183  589862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1115 09:58:50.980637  589862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b ...
	I1115 09:58:50.980669  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b: {Name:mk4986c594ab003033b784ceacd55ced33e1763e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.980844  589862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b ...
	I1115 09:58:50.980872  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b: {Name:mka6cc1e0399e53e8bf66b9c9957ff5fd5d16d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:50.981003  589862 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt.f25eab8b -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt
	I1115 09:58:50.981104  589862 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key.f25eab8b -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key
	I1115 09:58:50.981196  589862 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key
	I1115 09:58:50.981220  589862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt with IP's: []
	I1115 09:58:51.486385  589862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt ...
	I1115 09:58:51.486426  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt: {Name:mk86548d2fb9cffa7c9e24d245dabba7628d775d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:51.486620  589862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key ...
	I1115 09:58:51.486644  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key: {Name:mkb62ec7546a0f0eb8a891ecd6f3d1c152e38f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:58:51.486849  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 09:58:51.486896  589862 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 09:58:51.486924  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 09:58:51.486959  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 09:58:51.486999  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 09:58:51.487036  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 09:58:51.487099  589862 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 09:58:51.487717  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 09:58:51.505824  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 09:58:51.524629  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 09:58:51.542831  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 09:58:51.561961  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 09:58:51.580560  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 09:58:51.598695  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 09:58:51.617058  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 09:58:51.634459  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 09:58:51.655672  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 09:58:51.674543  589862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 09:58:51.693317  589862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 09:58:51.706898  589862 ssh_runner.go:195] Run: openssl version
	I1115 09:58:51.713710  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 09:58:51.723372  589862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 09:58:51.727726  589862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 09:58:51.727794  589862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 09:58:51.778507  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 09:58:51.791304  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 09:58:51.804320  589862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:51.810297  589862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:51.810365  589862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 09:58:51.872433  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 09:58:51.885717  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 09:58:51.899013  589862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 09:58:51.904365  589862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 09:58:51.904462  589862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 09:58:51.962993  589862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
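
	Aside (not harness output): the `openssl x509 -hash` / `ln -fs` pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA in this run), which is how TLS clients locate trusted CAs. A minimal Go sketch of that step, assuming openssl is on PATH and the process can write to /etc/ssl/certs.

```go
// Sketch only: create the <subject-hash>.0 symlink for a CA certificate.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	src := "/usr/share/ca-certificates/minikubeCA.pem" // PEM to hash
	dst := "/etc/ssl/certs/minikubeCA.pem"             // link target used in the log

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace an existing link, mirroring `ln -fs`
	if err := os.Symlink(dst, link); err != nil {
		log.Fatal(err)
	}
}
```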
	I1115 09:58:51.975533  589862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 09:58:51.980732  589862 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 09:58:51.980806  589862 kubeadm.go:401] StartCluster: {Name:no-preload-559401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-559401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:58:51.980917  589862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 09:58:51.981003  589862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 09:58:52.018488  589862 cri.go:89] found id: ""
	I1115 09:58:52.018652  589862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 09:58:52.030847  589862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 09:58:52.042691  589862 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 09:58:52.042765  589862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 09:58:52.053828  589862 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 09:58:52.053855  589862 kubeadm.go:158] found existing configuration files:
	
	I1115 09:58:52.053907  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 09:58:52.065475  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 09:58:52.065547  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 09:58:52.075773  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 09:58:52.086227  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 09:58:52.086291  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 09:58:52.096762  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 09:58:52.107754  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 09:58:52.107818  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 09:58:52.118641  589862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 09:58:52.129548  589862 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 09:58:52.129612  589862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
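
	Aside (not harness output): the grep/rm pairs above implement the stale-config cleanup: each kubeconfig kubeadm may have left behind is kept only if it already points at the expected control-plane endpoint, otherwise it is deleted so that `kubeadm init` regenerates it. A minimal Go sketch of that loop, using the endpoint and paths from this run.

```go
// Sketch only: remove kubeconfigs that do not target the expected endpoint.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already targets the right endpoint, keep it
		}
		// Missing or pointing elsewhere: remove it (equivalent to `grep || rm -f` above).
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			log.Println(err)
		}
	}
}
```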
	I1115 09:58:52.140206  589862 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 09:58:52.216980  589862 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 09:58:52.295837  589862 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1115 09:58:55.064556  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	W1115 09:58:57.564147  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	I1115 09:58:57.201317  539051 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.067565178s)
	W1115 09:58:57.201371  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1115 09:58:57.201388  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:58:57.201426  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:58:57.246162  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:58:57.246208  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:58:57.279745  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:58:57.279789  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:01.843237  589862 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 09:59:01.843317  589862 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 09:59:01.843408  589862 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 09:59:01.843482  589862 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 09:59:01.843531  589862 kubeadm.go:319] OS: Linux
	I1115 09:59:01.843604  589862 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 09:59:01.843722  589862 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 09:59:01.843805  589862 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 09:59:01.843883  589862 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 09:59:01.843950  589862 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 09:59:01.844027  589862 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 09:59:01.844100  589862 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 09:59:01.844195  589862 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 09:59:01.844304  589862 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 09:59:01.844451  589862 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 09:59:01.844630  589862 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 09:59:01.844724  589862 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 09:59:01.846951  589862 out.go:252]   - Generating certificates and keys ...
	I1115 09:59:01.847036  589862 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 09:59:01.847125  589862 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 09:59:01.847240  589862 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 09:59:01.847322  589862 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 09:59:01.847442  589862 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 09:59:01.847517  589862 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 09:59:01.847594  589862 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 09:59:01.847786  589862 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-559401] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 09:59:01.847867  589862 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 09:59:01.848053  589862 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-559401] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1115 09:59:01.848149  589862 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 09:59:01.848252  589862 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 09:59:01.848314  589862 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 09:59:01.848413  589862 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 09:59:01.848473  589862 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 09:59:01.848539  589862 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 09:59:01.848611  589862 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 09:59:01.848701  589862 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 09:59:01.848796  589862 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 09:59:01.848974  589862 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 09:59:01.849089  589862 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 09:59:01.850586  589862 out.go:252]   - Booting up control plane ...
	I1115 09:59:01.850699  589862 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 09:59:01.850837  589862 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 09:59:01.850930  589862 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 09:59:01.851112  589862 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 09:59:01.851252  589862 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 09:59:01.851424  589862 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 09:59:01.851564  589862 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 09:59:01.851602  589862 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 09:59:01.851716  589862 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 09:59:01.851834  589862 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 09:59:01.851926  589862 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001771287s
	I1115 09:59:01.852053  589862 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 09:59:01.852175  589862 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1115 09:59:01.852310  589862 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 09:59:01.852463  589862 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 09:59:01.852600  589862 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.284780336s
	I1115 09:59:01.852697  589862 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.748187822s
	I1115 09:59:01.852792  589862 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002085645s
	I1115 09:59:01.852910  589862 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 09:59:01.853059  589862 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 09:59:01.853112  589862 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 09:59:01.853348  589862 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-559401 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 09:59:01.853460  589862 kubeadm.go:319] [bootstrap-token] Using token: 9z2agn.qs0z4ulg6bsyvbug
	I1115 09:59:01.855000  589862 out.go:252]   - Configuring RBAC rules ...
	I1115 09:59:01.855162  589862 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 09:59:01.855277  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 09:59:01.855452  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 09:59:01.855626  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 09:59:01.855787  589862 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 09:59:01.855903  589862 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 09:59:01.856036  589862 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 09:59:01.856117  589862 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 09:59:01.856198  589862 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 09:59:01.856211  589862 kubeadm.go:319] 
	I1115 09:59:01.856290  589862 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 09:59:01.856307  589862 kubeadm.go:319] 
	I1115 09:59:01.856422  589862 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 09:59:01.856432  589862 kubeadm.go:319] 
	I1115 09:59:01.856461  589862 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 09:59:01.856541  589862 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 09:59:01.856592  589862 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 09:59:01.856598  589862 kubeadm.go:319] 
	I1115 09:59:01.856640  589862 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 09:59:01.856645  589862 kubeadm.go:319] 
	I1115 09:59:01.856689  589862 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 09:59:01.856699  589862 kubeadm.go:319] 
	I1115 09:59:01.856756  589862 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 09:59:01.856863  589862 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 09:59:01.856942  589862 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 09:59:01.856952  589862 kubeadm.go:319] 
	I1115 09:59:01.857054  589862 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 09:59:01.857119  589862 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 09:59:01.857125  589862 kubeadm.go:319] 
	I1115 09:59:01.857232  589862 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9z2agn.qs0z4ulg6bsyvbug \
	I1115 09:59:01.857416  589862 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 09:59:01.857452  589862 kubeadm.go:319] 	--control-plane 
	I1115 09:59:01.857461  589862 kubeadm.go:319] 
	I1115 09:59:01.857619  589862 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 09:59:01.857635  589862 kubeadm.go:319] 
	I1115 09:59:01.857736  589862 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9z2agn.qs0z4ulg6bsyvbug \
	I1115 09:59:01.857898  589862 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 09:59:01.857918  589862 cni.go:84] Creating CNI manager for ""
	I1115 09:59:01.857931  589862 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:59:01.860665  589862 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1115 09:59:00.064033  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	W1115 09:59:02.563741  585980 node_ready.go:57] node "old-k8s-version-335655" has "Ready":"False" status (will retry)
	I1115 09:58:59.819942  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:01.498994  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:35380->192.168.76.2:8443: read: connection reset by peer
	I1115 09:59:01.499072  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:01.499138  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:01.529135  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:01.529161  539051 cri.go:89] found id: "6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:59:01.529165  539051 cri.go:89] found id: ""
	I1115 09:59:01.529173  539051 logs.go:282] 2 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83]
	I1115 09:59:01.529237  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.533524  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.537428  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:01.537497  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:01.565354  539051 cri.go:89] found id: ""
	I1115 09:59:01.565381  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.565423  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:01.565433  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:01.565496  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:01.593051  539051 cri.go:89] found id: ""
	I1115 09:59:01.593080  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.593090  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:01.593098  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:01.593159  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:01.621510  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:01.621539  539051 cri.go:89] found id: ""
	I1115 09:59:01.621550  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:01.621600  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.626015  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:01.626087  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:01.655546  539051 cri.go:89] found id: ""
	I1115 09:59:01.655571  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.655579  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:01.655586  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:01.655641  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:01.683273  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:01.683294  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:01.683298  539051 cri.go:89] found id: ""
	I1115 09:59:01.683305  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:01.683360  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.687943  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:01.691770  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:01.691837  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:01.719169  539051 cri.go:89] found id: ""
	I1115 09:59:01.719198  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.719209  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:01.719215  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:01.719280  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:01.747446  539051 cri.go:89] found id: ""
	I1115 09:59:01.747479  539051 logs.go:282] 0 containers: []
	W1115 09:59:01.747491  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:01.747511  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:01.747526  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:01.765002  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:01.765036  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:01.799479  539051 logs.go:123] Gathering logs for kube-apiserver [6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83] ...
	I1115 09:59:01.799508  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6e086ea83d047a26a9c4e25f1da39e5d50b773039bd86362da97f4a25abe4a83"
	I1115 09:59:01.832627  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:01.832666  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:01.892833  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:01.892869  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:01.923960  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:01.924003  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:01.984171  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:01.984204  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:02.053287  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:02.053316  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:02.053339  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:02.085806  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:02.085843  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:02.123372  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:02.123413  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:01.861939  589862 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 09:59:01.867990  589862 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 09:59:01.868013  589862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 09:59:01.883408  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 09:59:02.123350  589862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 09:59:02.123545  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:02.123674  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-559401 minikube.k8s.io/updated_at=2025_11_15T09_59_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=no-preload-559401 minikube.k8s.io/primary=true
	I1115 09:59:02.139292  589862 ops.go:34] apiserver oom_adj: -16
	I1115 09:59:02.214537  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:02.715557  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:03.214907  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:03.715555  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:04.215364  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:04.715369  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:04.063246  585980 node_ready.go:49] node "old-k8s-version-335655" is "Ready"
	I1115 09:59:04.063283  585980 node_ready.go:38] duration metric: took 13.503131624s for node "old-k8s-version-335655" to be "Ready" ...
	I1115 09:59:04.063325  585980 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:59:04.063384  585980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:59:04.076227  585980 api_server.go:72] duration metric: took 13.895716827s to wait for apiserver process to appear ...
	I1115 09:59:04.076261  585980 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:59:04.076287  585980 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1115 09:59:04.080488  585980 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1115 09:59:04.081698  585980 api_server.go:141] control plane version: v1.28.0
	I1115 09:59:04.081725  585980 api_server.go:131] duration metric: took 5.455488ms to wait for apiserver health ...
	I1115 09:59:04.081735  585980 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:59:04.086869  585980 system_pods.go:59] 8 kube-system pods found
	I1115 09:59:04.087029  585980 system_pods.go:61] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:04.087376  585980 system_pods.go:61] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.087433  585980 system_pods.go:61] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.087442  585980 system_pods.go:61] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.087447  585980 system_pods.go:61] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.087452  585980 system_pods.go:61] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.087457  585980 system_pods.go:61] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.087467  585980 system_pods.go:61] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:04.087477  585980 system_pods.go:74] duration metric: took 5.733703ms to wait for pod list to return data ...
	I1115 09:59:04.087494  585980 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:59:04.090133  585980 default_sa.go:45] found service account: "default"
	I1115 09:59:04.090160  585980 default_sa.go:55] duration metric: took 2.6594ms for default service account to be created ...
	I1115 09:59:04.090173  585980 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:59:04.093482  585980 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:04.093513  585980 system_pods.go:89] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:04.093522  585980 system_pods.go:89] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.093532  585980 system_pods.go:89] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.093539  585980 system_pods.go:89] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.093550  585980 system_pods.go:89] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.093556  585980 system_pods.go:89] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.093561  585980 system_pods.go:89] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.093570  585980 system_pods.go:89] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:04.093601  585980 retry.go:31] will retry after 288.966141ms: missing components: kube-dns
	I1115 09:59:04.386773  585980 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:04.386811  585980 system_pods.go:89] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:04.386821  585980 system_pods.go:89] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.386829  585980 system_pods.go:89] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.386835  585980 system_pods.go:89] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.386840  585980 system_pods.go:89] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.386844  585980 system_pods.go:89] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.386850  585980 system_pods.go:89] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.386857  585980 system_pods.go:89] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:04.386882  585980 retry.go:31] will retry after 253.204914ms: missing components: kube-dns
	I1115 09:59:04.644142  585980 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:04.644181  585980 system_pods.go:89] "coredns-5dd5756b68-j8hqh" [e2853043-8da1-44cd-b87b-51cecce5b801] Running
	I1115 09:59:04.644191  585980 system_pods.go:89] "etcd-old-k8s-version-335655" [c169972f-a50f-420a-9c9f-da6a0847b99d] Running
	I1115 09:59:04.644197  585980 system_pods.go:89] "kindnet-w52sl" [44811fde-1c17-472e-9aa0-ffb839e2e4d2] Running
	I1115 09:59:04.644203  585980 system_pods.go:89] "kube-apiserver-old-k8s-version-335655" [afa65d8c-6f22-4303-aee4-c3c9b5775628] Running
	I1115 09:59:04.644209  585980 system_pods.go:89] "kube-controller-manager-old-k8s-version-335655" [d4de6043-e48a-4c33-a74d-fcf9caf6f324] Running
	I1115 09:59:04.644214  585980 system_pods.go:89] "kube-proxy-ndp6f" [771705b2-6cee-4952-b8b8-c3a6a4d8a4c7] Running
	I1115 09:59:04.644218  585980 system_pods.go:89] "kube-scheduler-old-k8s-version-335655" [4e430d3c-91c8-4730-94f4-1b811fed2ee1] Running
	I1115 09:59:04.644224  585980 system_pods.go:89] "storage-provisioner" [af2a330d-a530-455d-a428-c27df3d4ff47] Running
	I1115 09:59:04.644234  585980 system_pods.go:126] duration metric: took 554.05419ms to wait for k8s-apps to be running ...
	I1115 09:59:04.644249  585980 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:59:04.644310  585980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:59:04.658747  585980 system_svc.go:56] duration metric: took 14.468086ms WaitForService to wait for kubelet
	I1115 09:59:04.658785  585980 kubeadm.go:587] duration metric: took 14.478283435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:59:04.658805  585980 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:59:04.661765  585980 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:59:04.661793  585980 node_conditions.go:123] node cpu capacity is 8
	I1115 09:59:04.661807  585980 node_conditions.go:105] duration metric: took 2.998154ms to run NodePressure ...
	I1115 09:59:04.661819  585980 start.go:242] waiting for startup goroutines ...
	I1115 09:59:04.661825  585980 start.go:247] waiting for cluster config update ...
	I1115 09:59:04.661835  585980 start.go:256] writing updated cluster config ...
	I1115 09:59:04.662094  585980 ssh_runner.go:195] Run: rm -f paused
	I1115 09:59:04.666110  585980 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:59:04.670626  585980 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-j8hqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.675683  585980 pod_ready.go:94] pod "coredns-5dd5756b68-j8hqh" is "Ready"
	I1115 09:59:04.675708  585980 pod_ready.go:86] duration metric: took 5.059433ms for pod "coredns-5dd5756b68-j8hqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.678537  585980 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.683119  585980 pod_ready.go:94] pod "etcd-old-k8s-version-335655" is "Ready"
	I1115 09:59:04.683143  585980 pod_ready.go:86] duration metric: took 4.578145ms for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.686098  585980 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.691448  585980 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-335655" is "Ready"
	I1115 09:59:04.691479  585980 pod_ready.go:86] duration metric: took 5.351046ms for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:04.696203  585980 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.070297  585980 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-335655" is "Ready"
	I1115 09:59:05.070323  585980 pod_ready.go:86] duration metric: took 374.090068ms for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.271587  585980 pod_ready.go:83] waiting for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.671223  585980 pod_ready.go:94] pod "kube-proxy-ndp6f" is "Ready"
	I1115 09:59:05.671254  585980 pod_ready.go:86] duration metric: took 399.63459ms for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:05.871120  585980 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:06.270375  585980 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-335655" is "Ready"
	I1115 09:59:06.270414  585980 pod_ready.go:86] duration metric: took 399.263099ms for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:06.270430  585980 pod_ready.go:40] duration metric: took 1.604271623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:59:06.319133  585980 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 09:59:06.321125  585980 out.go:203] 
	W1115 09:59:06.322520  585980 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 09:59:06.323807  585980 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 09:59:06.325357  585980 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-335655" cluster and "default" namespace by default
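The warning above comes from the difference between the kubectl client minor version (1.34) and the cluster minor version (1.28), reported as "minor skew: 6". A small sketch of that arithmetic, using only the two version strings printed in the log:

	// skew.go: compute the kubectl/cluster minor-version skew reported in the log.
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minor extracts the minor component from a "major.minor.patch" version string.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	
	func main() {
		client, cluster := "1.34.2", "1.28.0"
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew) // prints 6
	}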
	I1115 09:59:05.214672  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:05.714585  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:06.214843  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:06.714731  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:07.214680  589862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 09:59:07.282745  589862 kubeadm.go:1114] duration metric: took 5.159254951s to wait for elevateKubeSystemPrivileges
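The repeated "kubectl get sa default" calls above, issued roughly every 500 ms between 09:59:02 and 09:59:07, are the wait for the "default" service account to exist before the cluster-admin binding takes effect; the line above closes that wait after about 5.2 s. A minimal sketch of the retry pattern, with the command assumed to run locally rather than over SSH:

	// waitsa.go: poll until `kubectl get sa default` succeeds or a deadline passes.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute) // arbitrary deadline for this sketch
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}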
	I1115 09:59:07.282789  589862 kubeadm.go:403] duration metric: took 15.301991399s to StartCluster
	I1115 09:59:07.282812  589862 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:59:07.282897  589862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:59:07.284224  589862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 09:59:07.284497  589862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 09:59:07.284513  589862 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 09:59:07.284596  589862 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 09:59:07.284708  589862 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:59:07.284714  589862 addons.go:70] Setting storage-provisioner=true in profile "no-preload-559401"
	I1115 09:59:07.284732  589862 addons.go:70] Setting default-storageclass=true in profile "no-preload-559401"
	I1115 09:59:07.284755  589862 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-559401"
	I1115 09:59:07.284736  589862 addons.go:239] Setting addon storage-provisioner=true in "no-preload-559401"
	I1115 09:59:07.284849  589862 host.go:66] Checking if "no-preload-559401" exists ...
	I1115 09:59:07.285171  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:59:07.285363  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:59:07.286334  589862 out.go:179] * Verifying Kubernetes components...
	I1115 09:59:07.287959  589862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 09:59:07.310269  589862 addons.go:239] Setting addon default-storageclass=true in "no-preload-559401"
	I1115 09:59:07.310306  589862 host.go:66] Checking if "no-preload-559401" exists ...
	I1115 09:59:07.310666  589862 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 09:59:07.312231  589862 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 09:59:07.314067  589862 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:59:07.314089  589862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 09:59:07.314150  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:59:07.347314  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:59:07.348931  589862 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 09:59:07.349025  589862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 09:59:07.349096  589862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 09:59:07.377854  589862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 09:59:07.396020  589862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 09:59:07.440608  589862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 09:59:07.469330  589862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 09:59:07.492839  589862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 09:59:07.562143  589862 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1115 09:59:07.563514  589862 node_ready.go:35] waiting up to 6m0s for node "no-preload-559401" to be "Ready" ...
	I1115 09:59:07.788210  589862 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 09:59:04.748782  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:04.749213  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:04.749269  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:04.749319  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:04.779723  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:04.779750  539051 cri.go:89] found id: ""
	I1115 09:59:04.779761  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:04.779829  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.784483  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:04.784558  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:04.814230  539051 cri.go:89] found id: ""
	I1115 09:59:04.814253  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.814261  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:04.814267  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:04.814320  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:04.842414  539051 cri.go:89] found id: ""
	I1115 09:59:04.842444  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.842452  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:04.842459  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:04.842520  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:04.871888  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:04.871909  539051 cri.go:89] found id: ""
	I1115 09:59:04.871917  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:04.871966  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.876258  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:04.876324  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:04.904787  539051 cri.go:89] found id: ""
	I1115 09:59:04.904809  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.904817  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:04.904825  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:04.904886  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:04.933869  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:04.933892  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:04.933898  539051 cri.go:89] found id: ""
	I1115 09:59:04.933907  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:04.933968  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.938118  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:04.941861  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:04.941931  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:04.970227  539051 cri.go:89] found id: ""
	I1115 09:59:04.970260  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.970271  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:04.970278  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:04.970331  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:04.997732  539051 cri.go:89] found id: ""
	I1115 09:59:04.997757  539051 logs.go:282] 0 containers: []
	W1115 09:59:04.997764  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:04.997788  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:04.997803  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:05.032018  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:05.032052  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:05.060554  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:05.060584  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:05.088865  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:05.088895  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:05.178592  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:05.178628  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:05.195670  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:05.195699  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:05.258518  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:05.258542  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:05.258557  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:05.312985  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:05.313020  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:05.365977  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:05.366017  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
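Each cycle in this block starts the same way: probe https://192.168.76.2:8443/healthz, treat a refused connection as "apiserver not up yet", gather logs, and retry. A minimal sketch of that health probe follows; the insecure TLS config and the deadline are assumptions for the sketch only (minikube authenticates with the cluster's client certificates).

	// healthz.go: poll the apiserver /healthz endpoint until it answers 200 or a deadline passes.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skipping certificate verification is an assumption for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute) // arbitrary deadline for this sketch
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err != nil {
				fmt.Println("apiserver not reachable yet:", err) // e.g. connection refused, as in the log
			} else {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("apiserver responded with status", status)
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}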
	I1115 09:59:07.898786  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:07.899246  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:07.899298  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:07.899345  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:07.929383  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:07.929422  539051 cri.go:89] found id: ""
	I1115 09:59:07.929433  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:07.929489  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:07.933944  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:07.934015  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:07.965700  539051 cri.go:89] found id: ""
	I1115 09:59:07.965730  539051 logs.go:282] 0 containers: []
	W1115 09:59:07.965743  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:07.965750  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:07.965809  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:07.994470  539051 cri.go:89] found id: ""
	I1115 09:59:07.994499  539051 logs.go:282] 0 containers: []
	W1115 09:59:07.994509  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:07.994519  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:07.994578  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:08.021550  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:08.021583  539051 cri.go:89] found id: ""
	I1115 09:59:08.021591  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:08.021640  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:08.025967  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:08.026027  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:08.053206  539051 cri.go:89] found id: ""
	I1115 09:59:08.053236  539051 logs.go:282] 0 containers: []
	W1115 09:59:08.053245  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:08.053252  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:08.053312  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:08.081560  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:08.081593  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:08.081599  539051 cri.go:89] found id: ""
	I1115 09:59:08.081609  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:08.081685  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:08.086335  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:08.090834  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:08.090917  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:08.120514  539051 cri.go:89] found id: ""
	I1115 09:59:08.120546  539051 logs.go:282] 0 containers: []
	W1115 09:59:08.120556  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:08.120566  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:08.120642  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:08.154644  539051 cri.go:89] found id: ""
	I1115 09:59:08.154671  539051 logs.go:282] 0 containers: []
	W1115 09:59:08.154681  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:08.154704  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:08.154719  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:08.184135  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:08.184166  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:08.219072  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:08.219103  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:08.279466  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:08.279497  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:08.279516  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:08.335307  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:08.335352  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:08.364915  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:08.364949  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:08.416132  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:08.416183  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:08.513500  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:08.513538  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:08.531227  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:08.531256  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:07.789539  589862 addons.go:515] duration metric: took 504.952942ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 09:59:08.066524  589862 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-559401" context rescaled to 1 replicas
	W1115 09:59:09.566911  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
	I1115 09:59:11.067035  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:11.067548  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:11.067618  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:11.067681  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:11.095892  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:11.095920  539051 cri.go:89] found id: ""
	I1115 09:59:11.095930  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:11.095982  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.100256  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:11.100325  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:11.132951  539051 cri.go:89] found id: ""
	I1115 09:59:11.132988  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.133000  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:11.133009  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:11.133075  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:11.162608  539051 cri.go:89] found id: ""
	I1115 09:59:11.162631  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.162639  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:11.162646  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:11.162692  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:11.191185  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:11.191210  539051 cri.go:89] found id: ""
	I1115 09:59:11.191220  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:11.191283  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.195540  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:11.195604  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:11.223635  539051 cri.go:89] found id: ""
	I1115 09:59:11.223669  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.223681  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:11.223689  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:11.223761  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:11.252109  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:11.252136  539051 cri.go:89] found id: "46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:11.252142  539051 cri.go:89] found id: ""
	I1115 09:59:11.252152  539051 logs.go:282] 2 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1]
	I1115 09:59:11.252213  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.256490  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:11.260452  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:11.260517  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:11.288347  539051 cri.go:89] found id: ""
	I1115 09:59:11.288370  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.288379  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:11.288386  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:11.288463  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:11.316840  539051 cri.go:89] found id: ""
	I1115 09:59:11.316872  539051 logs.go:282] 0 containers: []
	W1115 09:59:11.316888  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:11.316909  539051 logs.go:123] Gathering logs for kube-controller-manager [46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1] ...
	I1115 09:59:11.316926  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46160fbbe55364bab799169de5872be0c988fd86eaebbc18b4104951192d18b1"
	I1115 09:59:11.345259  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:11.345288  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:11.377138  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:11.377172  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:11.435249  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:11.435276  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:11.435299  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:11.462991  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:11.463017  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:11.511643  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:11.511680  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:11.598975  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:11.599013  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:11.615855  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:11.615885  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:11.654559  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:11.654597  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:14.205487  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:14.205938  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:14.205990  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:14.206036  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:14.233815  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:14.233843  539051 cri.go:89] found id: ""
	I1115 09:59:14.233854  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:14.233914  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:14.238686  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:14.238762  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:14.266849  539051 cri.go:89] found id: ""
	I1115 09:59:14.266874  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.266883  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:14.266895  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:14.266945  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:14.295140  539051 cri.go:89] found id: ""
	I1115 09:59:14.295173  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.295185  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:14.295193  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:14.295259  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:14.323355  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:14.323375  539051 cri.go:89] found id: ""
	I1115 09:59:14.323383  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:14.323450  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:14.327639  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:14.327704  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:14.355633  539051 cri.go:89] found id: ""
	I1115 09:59:14.355656  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.355664  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:14.355670  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:14.355716  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:14.385052  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:14.385072  539051 cri.go:89] found id: ""
	I1115 09:59:14.385080  539051 logs.go:282] 1 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe]
	I1115 09:59:14.385139  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:14.389214  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:14.389278  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:14.416441  539051 cri.go:89] found id: ""
	I1115 09:59:14.416474  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.416497  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:14.416506  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:14.416557  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:14.443820  539051 cri.go:89] found id: ""
	I1115 09:59:14.443848  539051 logs.go:282] 0 containers: []
	W1115 09:59:14.443858  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:14.443868  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:14.443882  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:14.495030  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:14.495076  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:14.522454  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:14.522488  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:14.572308  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:14.572342  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:14.603675  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:14.603702  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1115 09:59:11.567034  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
	W1115 09:59:14.066799  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
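The node_ready retries above keep polling the node object until its Ready condition turns True, which happens at 09:59:21 after about 13.5 s; the log waits up to 6m0s for this. A minimal way to watch for the same condition from the host, assuming kubectl and the test kubeconfig are available locally:

	// nodeready.go: wait until a node reports Ready=True, the condition the retries above poll.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func main() {
		const node = "no-preload-559401"
		jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
		deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s" for node Ready
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "node", node,
				"-o", "jsonpath="+jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("node", node, "is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node", node)
	}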
	I1115 09:59:14.695821  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:14.695860  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:14.713245  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:14.713272  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:14.771641  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:14.771662  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:14.771678  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:17.305464  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:17.305946  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:17.305995  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:17.306043  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:17.338221  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:17.338254  539051 cri.go:89] found id: ""
	I1115 09:59:17.338278  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:17.338364  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:17.343113  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:17.343179  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:17.378694  539051 cri.go:89] found id: ""
	I1115 09:59:17.378724  539051 logs.go:282] 0 containers: []
	W1115 09:59:17.378732  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:17.378739  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:17.378805  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:17.406848  539051 cri.go:89] found id: ""
	I1115 09:59:17.406876  539051 logs.go:282] 0 containers: []
	W1115 09:59:17.406886  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:17.406894  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:17.406956  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:17.442334  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:17.442353  539051 cri.go:89] found id: ""
	I1115 09:59:17.442361  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:17.442430  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:17.447427  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:17.447542  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:17.479959  539051 cri.go:89] found id: ""
	I1115 09:59:17.479987  539051 logs.go:282] 0 containers: []
	W1115 09:59:17.479997  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:17.480005  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:17.480063  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:17.508380  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:17.508414  539051 cri.go:89] found id: ""
	I1115 09:59:17.508424  539051 logs.go:282] 1 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe]
	I1115 09:59:17.508502  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:17.512636  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:17.512704  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:17.540710  539051 cri.go:89] found id: ""
	I1115 09:59:17.540744  539051 logs.go:282] 0 containers: []
	W1115 09:59:17.540758  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:17.540768  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:17.540832  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:17.568230  539051 cri.go:89] found id: ""
	I1115 09:59:17.568256  539051 logs.go:282] 0 containers: []
	W1115 09:59:17.568265  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:17.568278  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:17.568293  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:17.601164  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:17.601203  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:17.695944  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:17.695988  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:17.715279  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:17.715318  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:17.778979  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:17.779002  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:17.779018  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:17.812129  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:17.812162  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:17.867288  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:17.867320  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:17.899784  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:17.899822  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1115 09:59:16.066843  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
	W1115 09:59:18.567168  589862 node_ready.go:57] node "no-preload-559401" has "Ready":"False" status (will retry)
	I1115 09:59:21.066687  589862 node_ready.go:49] node "no-preload-559401" is "Ready"
	I1115 09:59:21.066722  589862 node_ready.go:38] duration metric: took 13.503178836s for node "no-preload-559401" to be "Ready" ...
	I1115 09:59:21.066749  589862 api_server.go:52] waiting for apiserver process to appear ...
	I1115 09:59:21.066804  589862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:59:21.080766  589862 api_server.go:72] duration metric: took 13.796214881s to wait for apiserver process to appear ...
	I1115 09:59:21.080796  589862 api_server.go:88] waiting for apiserver healthz status ...
	I1115 09:59:21.080821  589862 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 09:59:21.087381  589862 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 09:59:21.088373  589862 api_server.go:141] control plane version: v1.34.1
	I1115 09:59:21.088410  589862 api_server.go:131] duration metric: took 7.606916ms to wait for apiserver health ...
	I1115 09:59:21.088419  589862 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 09:59:21.092020  589862 system_pods.go:59] 8 kube-system pods found
	I1115 09:59:21.092053  589862 system_pods.go:61] "coredns-66bc5c9577-dh55n" [582f90bb-ec3c-4d2b-aa98-31dc4cab6d88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:21.092059  589862 system_pods.go:61] "etcd-no-preload-559401" [e7025cdd-c688-4362-8084-872a7cfc6892] Running
	I1115 09:59:21.092066  589862 system_pods.go:61] "kindnet-b5x55" [6fa4c5d0-7a46-4a00-ac93-cffc63d77181] Running
	I1115 09:59:21.092070  589862 system_pods.go:61] "kube-apiserver-no-preload-559401" [e50b2f14-28a5-40cb-bc47-bab55a554409] Running
	I1115 09:59:21.092074  589862 system_pods.go:61] "kube-controller-manager-no-preload-559401" [afbf24b8-a62c-4d28-90e7-6d87cbf7b8df] Running
	I1115 09:59:21.092078  589862 system_pods.go:61] "kube-proxy-sbk5r" [b4d77915-a105-43ac-bd1c-73bdf1bbcec4] Running
	I1115 09:59:21.092081  589862 system_pods.go:61] "kube-scheduler-no-preload-559401" [a3f29aec-6dc4-42cd-a788-e518558a0963] Running
	I1115 09:59:21.092086  589862 system_pods.go:61] "storage-provisioner" [8a18d053-ec9c-429e-b84b-5565c197d2a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:21.092092  589862 system_pods.go:74] duration metric: took 3.668271ms to wait for pod list to return data ...
	I1115 09:59:21.092103  589862 default_sa.go:34] waiting for default service account to be created ...
	I1115 09:59:21.094674  589862 default_sa.go:45] found service account: "default"
	I1115 09:59:21.094700  589862 default_sa.go:55] duration metric: took 2.589635ms for default service account to be created ...
	I1115 09:59:21.094713  589862 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 09:59:21.097909  589862 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:21.097945  589862 system_pods.go:89] "coredns-66bc5c9577-dh55n" [582f90bb-ec3c-4d2b-aa98-31dc4cab6d88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 09:59:21.097952  589862 system_pods.go:89] "etcd-no-preload-559401" [e7025cdd-c688-4362-8084-872a7cfc6892] Running
	I1115 09:59:21.097962  589862 system_pods.go:89] "kindnet-b5x55" [6fa4c5d0-7a46-4a00-ac93-cffc63d77181] Running
	I1115 09:59:21.097968  589862 system_pods.go:89] "kube-apiserver-no-preload-559401" [e50b2f14-28a5-40cb-bc47-bab55a554409] Running
	I1115 09:59:21.097974  589862 system_pods.go:89] "kube-controller-manager-no-preload-559401" [afbf24b8-a62c-4d28-90e7-6d87cbf7b8df] Running
	I1115 09:59:21.097980  589862 system_pods.go:89] "kube-proxy-sbk5r" [b4d77915-a105-43ac-bd1c-73bdf1bbcec4] Running
	I1115 09:59:21.097986  589862 system_pods.go:89] "kube-scheduler-no-preload-559401" [a3f29aec-6dc4-42cd-a788-e518558a0963] Running
	I1115 09:59:21.097996  589862 system_pods.go:89] "storage-provisioner" [8a18d053-ec9c-429e-b84b-5565c197d2a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 09:59:21.098030  589862 retry.go:31] will retry after 298.658929ms: missing components: kube-dns
	I1115 09:59:21.401201  589862 system_pods.go:86] 8 kube-system pods found
	I1115 09:59:21.401229  589862 system_pods.go:89] "coredns-66bc5c9577-dh55n" [582f90bb-ec3c-4d2b-aa98-31dc4cab6d88] Running
	I1115 09:59:21.401235  589862 system_pods.go:89] "etcd-no-preload-559401" [e7025cdd-c688-4362-8084-872a7cfc6892] Running
	I1115 09:59:21.401238  589862 system_pods.go:89] "kindnet-b5x55" [6fa4c5d0-7a46-4a00-ac93-cffc63d77181] Running
	I1115 09:59:21.401241  589862 system_pods.go:89] "kube-apiserver-no-preload-559401" [e50b2f14-28a5-40cb-bc47-bab55a554409] Running
	I1115 09:59:21.401246  589862 system_pods.go:89] "kube-controller-manager-no-preload-559401" [afbf24b8-a62c-4d28-90e7-6d87cbf7b8df] Running
	I1115 09:59:21.401251  589862 system_pods.go:89] "kube-proxy-sbk5r" [b4d77915-a105-43ac-bd1c-73bdf1bbcec4] Running
	I1115 09:59:21.401255  589862 system_pods.go:89] "kube-scheduler-no-preload-559401" [a3f29aec-6dc4-42cd-a788-e518558a0963] Running
	I1115 09:59:21.401260  589862 system_pods.go:89] "storage-provisioner" [8a18d053-ec9c-429e-b84b-5565c197d2a5] Running
	I1115 09:59:21.401270  589862 system_pods.go:126] duration metric: took 306.549869ms to wait for k8s-apps to be running ...
	I1115 09:59:21.401284  589862 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 09:59:21.401334  589862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:59:21.414524  589862 system_svc.go:56] duration metric: took 13.229174ms WaitForService to wait for kubelet
	I1115 09:59:21.414559  589862 kubeadm.go:587] duration metric: took 14.130017217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 09:59:21.414585  589862 node_conditions.go:102] verifying NodePressure condition ...
	I1115 09:59:21.417512  589862 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 09:59:21.417546  589862 node_conditions.go:123] node cpu capacity is 8
	I1115 09:59:21.417561  589862 node_conditions.go:105] duration metric: took 2.970205ms to run NodePressure ...
	I1115 09:59:21.417577  589862 start.go:242] waiting for startup goroutines ...
	I1115 09:59:21.417587  589862 start.go:247] waiting for cluster config update ...
	I1115 09:59:21.417602  589862 start.go:256] writing updated cluster config ...
	I1115 09:59:21.417920  589862 ssh_runner.go:195] Run: rm -f paused
	I1115 09:59:21.422106  589862 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:59:21.425445  589862 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dh55n" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:21.429463  589862 pod_ready.go:94] pod "coredns-66bc5c9577-dh55n" is "Ready"
	I1115 09:59:21.429486  589862 pod_ready.go:86] duration metric: took 4.016319ms for pod "coredns-66bc5c9577-dh55n" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:21.431265  589862 pod_ready.go:83] waiting for pod "etcd-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:21.434677  589862 pod_ready.go:94] pod "etcd-no-preload-559401" is "Ready"
	I1115 09:59:21.434695  589862 pod_ready.go:86] duration metric: took 3.409925ms for pod "etcd-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:21.436366  589862 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:21.439708  589862 pod_ready.go:94] pod "kube-apiserver-no-preload-559401" is "Ready"
	I1115 09:59:21.439729  589862 pod_ready.go:86] duration metric: took 3.34397ms for pod "kube-apiserver-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:21.441331  589862 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:21.826433  589862 pod_ready.go:94] pod "kube-controller-manager-no-preload-559401" is "Ready"
	I1115 09:59:21.826464  589862 pod_ready.go:86] duration metric: took 385.115564ms for pod "kube-controller-manager-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:22.026968  589862 pod_ready.go:83] waiting for pod "kube-proxy-sbk5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:22.426355  589862 pod_ready.go:94] pod "kube-proxy-sbk5r" is "Ready"
	I1115 09:59:22.426382  589862 pod_ready.go:86] duration metric: took 399.379287ms for pod "kube-proxy-sbk5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:22.626292  589862 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:23.026241  589862 pod_ready.go:94] pod "kube-scheduler-no-preload-559401" is "Ready"
	I1115 09:59:23.026267  589862 pod_ready.go:86] duration metric: took 399.947442ms for pod "kube-scheduler-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 09:59:23.026278  589862 pod_ready.go:40] duration metric: took 1.604138597s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 09:59:23.071959  589862 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 09:59:23.074121  589862 out.go:179] * Done! kubectl is now configured to use "no-preload-559401" cluster and "default" namespace by default
	I1115 09:59:20.456837  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:20.457282  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:20.457337  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:20.457421  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:20.488514  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:20.488542  539051 cri.go:89] found id: ""
	I1115 09:59:20.488552  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:20.488615  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:20.492849  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:20.492926  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:20.523622  539051 cri.go:89] found id: ""
	I1115 09:59:20.523650  539051 logs.go:282] 0 containers: []
	W1115 09:59:20.523661  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:20.523670  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:20.523747  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:20.553426  539051 cri.go:89] found id: ""
	I1115 09:59:20.553459  539051 logs.go:282] 0 containers: []
	W1115 09:59:20.553495  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:20.553504  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:20.553575  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:20.582380  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:20.582434  539051 cri.go:89] found id: ""
	I1115 09:59:20.582444  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:20.582495  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:20.586607  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:20.586735  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:20.613567  539051 cri.go:89] found id: ""
	I1115 09:59:20.613592  539051 logs.go:282] 0 containers: []
	W1115 09:59:20.613603  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:20.613610  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:20.613679  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:20.643413  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:20.643441  539051 cri.go:89] found id: ""
	I1115 09:59:20.643453  539051 logs.go:282] 1 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe]
	I1115 09:59:20.643518  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:20.648201  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:20.648273  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:20.684000  539051 cri.go:89] found id: ""
	I1115 09:59:20.684028  539051 logs.go:282] 0 containers: []
	W1115 09:59:20.684039  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:20.684049  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:20.684117  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:20.712275  539051 cri.go:89] found id: ""
	I1115 09:59:20.712304  539051 logs.go:282] 0 containers: []
	W1115 09:59:20.712316  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:20.712329  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:20.712357  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:20.764791  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:20.764825  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:20.792557  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:20.792585  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:20.845747  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:20.845799  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:20.877867  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:20.877895  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:20.967108  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:20.967144  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:20.988339  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:20.988381  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:21.064015  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:21.064049  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:21.064068  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:23.604717  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:23.605188  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:23.605251  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:23.605313  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:23.633225  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:23.633248  539051 cri.go:89] found id: ""
	I1115 09:59:23.633257  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:23.633305  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:23.637672  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:23.637751  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:23.666969  539051 cri.go:89] found id: ""
	I1115 09:59:23.666994  539051 logs.go:282] 0 containers: []
	W1115 09:59:23.667004  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:23.667012  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:23.667072  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:23.695707  539051 cri.go:89] found id: ""
	I1115 09:59:23.695737  539051 logs.go:282] 0 containers: []
	W1115 09:59:23.695747  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:23.695756  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:23.695828  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:23.724100  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:23.724120  539051 cri.go:89] found id: ""
	I1115 09:59:23.724128  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:23.724178  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:23.728155  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:23.728228  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:23.755379  539051 cri.go:89] found id: ""
	I1115 09:59:23.755417  539051 logs.go:282] 0 containers: []
	W1115 09:59:23.755425  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:23.755431  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:23.755500  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:23.781350  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:23.781375  539051 cri.go:89] found id: ""
	I1115 09:59:23.781384  539051 logs.go:282] 1 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe]
	I1115 09:59:23.781475  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:23.785876  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:23.785948  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:23.812666  539051 cri.go:89] found id: ""
	I1115 09:59:23.812699  539051 logs.go:282] 0 containers: []
	W1115 09:59:23.812709  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:23.812716  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:23.812769  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:23.840270  539051 cri.go:89] found id: ""
	I1115 09:59:23.840300  539051 logs.go:282] 0 containers: []
	W1115 09:59:23.840311  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:23.840337  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:23.840356  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:23.937012  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:23.937059  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:23.954862  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:23.954895  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:24.013742  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:24.013764  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:24.013778  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:24.046867  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:24.046910  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:24.100398  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:24.100465  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:24.128144  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:24.128177  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:24.181605  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:24.181654  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1115 09:59:26.713345  539051 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 09:59:26.713852  539051 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1115 09:59:26.713926  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1115 09:59:26.713983  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1115 09:59:26.742629  539051 cri.go:89] found id: "da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:26.742649  539051 cri.go:89] found id: ""
	I1115 09:59:26.742657  539051 logs.go:282] 1 containers: [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8]
	I1115 09:59:26.742710  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:26.746867  539051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1115 09:59:26.746928  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1115 09:59:26.774549  539051 cri.go:89] found id: ""
	I1115 09:59:26.774575  539051 logs.go:282] 0 containers: []
	W1115 09:59:26.774584  539051 logs.go:284] No container was found matching "etcd"
	I1115 09:59:26.774592  539051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1115 09:59:26.774651  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1115 09:59:26.802222  539051 cri.go:89] found id: ""
	I1115 09:59:26.802250  539051 logs.go:282] 0 containers: []
	W1115 09:59:26.802261  539051 logs.go:284] No container was found matching "coredns"
	I1115 09:59:26.802269  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1115 09:59:26.802330  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1115 09:59:26.829589  539051 cri.go:89] found id: "985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:26.829613  539051 cri.go:89] found id: ""
	I1115 09:59:26.829621  539051 logs.go:282] 1 containers: [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4]
	I1115 09:59:26.829669  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:26.833591  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1115 09:59:26.833654  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1115 09:59:26.860653  539051 cri.go:89] found id: ""
	I1115 09:59:26.860679  539051 logs.go:282] 0 containers: []
	W1115 09:59:26.860686  539051 logs.go:284] No container was found matching "kube-proxy"
	I1115 09:59:26.860693  539051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1115 09:59:26.860748  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1115 09:59:26.889232  539051 cri.go:89] found id: "7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:26.889257  539051 cri.go:89] found id: ""
	I1115 09:59:26.889266  539051 logs.go:282] 1 containers: [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe]
	I1115 09:59:26.889317  539051 ssh_runner.go:195] Run: which crictl
	I1115 09:59:26.893288  539051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1115 09:59:26.893356  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1115 09:59:26.921319  539051 cri.go:89] found id: ""
	I1115 09:59:26.921347  539051 logs.go:282] 0 containers: []
	W1115 09:59:26.921356  539051 logs.go:284] No container was found matching "kindnet"
	I1115 09:59:26.921362  539051 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1115 09:59:26.921432  539051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1115 09:59:26.949230  539051 cri.go:89] found id: ""
	I1115 09:59:26.949259  539051 logs.go:282] 0 containers: []
	W1115 09:59:26.949269  539051 logs.go:284] No container was found matching "storage-provisioner"
	I1115 09:59:26.949282  539051 logs.go:123] Gathering logs for kubelet ...
	I1115 09:59:26.949297  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1115 09:59:27.038892  539051 logs.go:123] Gathering logs for dmesg ...
	I1115 09:59:27.038937  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1115 09:59:27.055692  539051 logs.go:123] Gathering logs for describe nodes ...
	I1115 09:59:27.055722  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1115 09:59:27.116922  539051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1115 09:59:27.116943  539051 logs.go:123] Gathering logs for kube-apiserver [da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8] ...
	I1115 09:59:27.116956  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da3cd48d1e249256e34f4bcfd9e3feb145ed1ac7abc8310d77c79ff3c66a47a8"
	I1115 09:59:27.152638  539051 logs.go:123] Gathering logs for kube-scheduler [985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4] ...
	I1115 09:59:27.152673  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 985b665ec1a928229676e0e490b6af3b501d8b342ce051c4431cf4576df2cbd4"
	I1115 09:59:27.207666  539051 logs.go:123] Gathering logs for kube-controller-manager [7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe] ...
	I1115 09:59:27.207708  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f95a64c69b00379b916574d84d647e7e0294316d9553340dbe2b3cd319dfbfe"
	I1115 09:59:27.236547  539051 logs.go:123] Gathering logs for CRI-O ...
	I1115 09:59:27.236576  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1115 09:59:27.288761  539051 logs.go:123] Gathering logs for container status ...
	I1115 09:59:27.288806  539051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Nov 15 09:59:21 no-preload-559401 crio[771]: time="2025-11-15T09:59:21.024412389Z" level=info msg="Started container" PID=2929 containerID=588e10127f47a891ae86b67b0e07d4c4066c82070c1e3f466483e36bfdebfa66 description=kube-system/storage-provisioner/storage-provisioner id=cfdb4e4d-e2d2-4d13-a4d2-37ab128f066a name=/runtime.v1.RuntimeService/StartContainer sandboxID=fee68a08b6e6cdaa8fab8ff58f0ed0690c90a31d5bc2bae32e85d29178d8b5d0
	Nov 15 09:59:21 no-preload-559401 crio[771]: time="2025-11-15T09:59:21.024657532Z" level=info msg="Started container" PID=2933 containerID=b5aea229ebbdd2a868a8b57dd4896dabdf0f63750ee95f35772a9c870a475058 description=kube-system/coredns-66bc5c9577-dh55n/coredns id=99543d58-c8e1-4fca-b9dc-47dc68ae24ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb0b2d876000ce4b86ece9c61e06c2df08137b3e168e46e20f45d300fcfe7863
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.522620842Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9083d687-fa0d-4822-87e8-d91cd755a6d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.52272237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.527509381Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:67cc183893b52aeedd7f9946e04a811b29c7a3c580e89605287399fdc8a83fe8 UID:4972a866-c48a-427f-8837-dd6d8889a805 NetNS:/var/run/netns/1c081ad1-e6e1-4e35-b6cc-e65d53cc8a9c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051ac50}] Aliases:map[]}"
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.527538568Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.537035354Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:67cc183893b52aeedd7f9946e04a811b29c7a3c580e89605287399fdc8a83fe8 UID:4972a866-c48a-427f-8837-dd6d8889a805 NetNS:/var/run/netns/1c081ad1-e6e1-4e35-b6cc-e65d53cc8a9c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00051ac50}] Aliases:map[]}"
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.537158459Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.537988081Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.538834028Z" level=info msg="Ran pod sandbox 67cc183893b52aeedd7f9946e04a811b29c7a3c580e89605287399fdc8a83fe8 with infra container: default/busybox/POD" id=9083d687-fa0d-4822-87e8-d91cd755a6d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.540037284Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0808db1-2813-4d6b-9ebd-e9631f590ca4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.540152293Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b0808db1-2813-4d6b-9ebd-e9631f590ca4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.540190454Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b0808db1-2813-4d6b-9ebd-e9631f590ca4 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.540832566Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e8b9172-2715-4efa-a1f0-06f2f1bb2132 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:59:23 no-preload-559401 crio[771]: time="2025-11-15T09:59:23.54238635Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.754280749Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6e8b9172-2715-4efa-a1f0-06f2f1bb2132 name=/runtime.v1.ImageService/PullImage
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.754946214Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b6462a36-8d19-464c-88b5-62d0b9dcb708 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.756269387Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f5b5e5e-9163-434a-a5b9-6e6d32a17752 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.759429965Z" level=info msg="Creating container: default/busybox/busybox" id=d2fb82eb-d790-49ba-a30d-478c4420c2c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.759567304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.764046348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.764486048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.790107101Z" level=info msg="Created container 62a91abc71b1ba584c04efdb16684905ea21baf2071936d45611e355b0a315e8: default/busybox/busybox" id=d2fb82eb-d790-49ba-a30d-478c4420c2c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.79073727Z" level=info msg="Starting container: 62a91abc71b1ba584c04efdb16684905ea21baf2071936d45611e355b0a315e8" id=f94143d4-9b66-444e-af5b-4fdf317d1161 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 09:59:25 no-preload-559401 crio[771]: time="2025-11-15T09:59:25.792441229Z" level=info msg="Started container" PID=3008 containerID=62a91abc71b1ba584c04efdb16684905ea21baf2071936d45611e355b0a315e8 description=default/busybox/busybox id=f94143d4-9b66-444e-af5b-4fdf317d1161 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67cc183893b52aeedd7f9946e04a811b29c7a3c580e89605287399fdc8a83fe8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	62a91abc71b1b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   67cc183893b52       busybox                                     default
	b5aea229ebbdd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   fb0b2d876000c       coredns-66bc5c9577-dh55n                    kube-system
	588e10127f47a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   fee68a08b6e6c       storage-provisioner                         kube-system
	e2257e64d7e33       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   3b26f0f18ac5e       kindnet-b5x55                               kube-system
	a696f1248eed5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   861e46a473710       kube-proxy-sbk5r                            kube-system
	118ed197c237c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   cca2037c5f0ad       kube-scheduler-no-preload-559401            kube-system
	a3e9418a48f69       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   5dce35155435c       kube-controller-manager-no-preload-559401   kube-system
	8a5febcb768d0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   2c220f1058a91       kube-apiserver-no-preload-559401            kube-system
	ce072f01b1abe       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   7461a9b9dff04       etcd-no-preload-559401                      kube-system
	
	
	==> coredns [b5aea229ebbdd2a868a8b57dd4896dabdf0f63750ee95f35772a9c870a475058] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52194 - 64836 "HINFO IN 5180235446754342059.1389150066334824028. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023101664s
	
	
	==> describe nodes <==
	Name:               no-preload-559401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-559401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=no-preload-559401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_59_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:58:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-559401
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:59:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:59:31 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:59:31 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:59:31 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:59:31 +0000   Sat, 15 Nov 2025 09:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-559401
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                952f299f-14db-4c2b-b6e4-27ef9280d1fa
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-dh55n                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-559401                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-b5x55                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-559401             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-559401    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-sbk5r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-559401             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node no-preload-559401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node no-preload-559401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node no-preload-559401 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-559401 event: Registered Node no-preload-559401 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-559401 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [ce072f01b1abef930b2352c4c9bba6a6716265999890ccb38ecc13ced54ea618] <==
	{"level":"warn","ts":"2025-11-15T09:58:57.556376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.564663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.571490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.585646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.592867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.600178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.610563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.616539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.623182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.630562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.636922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.644330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.650496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.656824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.663445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.670826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.677578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.685382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.705770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.715117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.722653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:58:57.771621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:58:59.482182Z","caller":"traceutil/trace.go:172","msg":"trace[154469987] transaction","detail":"{read_only:false; response_revision:109; number_of_response:1; }","duration":"127.748491ms","start":"2025-11-15T09:58:59.354412Z","end":"2025-11-15T09:58:59.482161Z","steps":["trace[154469987] 'process raft request'  (duration: 119.925316ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T09:58:59.740039Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.113576ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790031503391797 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:controller:ephemeral-volume-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:controller:ephemeral-volume-controller\" value_size:670 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-15T09:58:59.740159Z","caller":"traceutil/trace.go:172","msg":"trace[1698385401] transaction","detail":"{read_only:false; response_revision:111; number_of_response:1; }","duration":"189.287891ms","start":"2025-11-15T09:58:59.550856Z","end":"2025-11-15T09:58:59.740144Z","steps":["trace[1698385401] 'process raft request'  (duration: 59.766463ms)","trace[1698385401] 'compare'  (duration: 129.004598ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:59:32 up  1:41,  0 user,  load average: 2.08, 2.30, 1.62
	Linux no-preload-559401 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2257e64d7e338b0435b689be3ba0da8419f9ebc3daaaf145d7157636bae220f] <==
	I1115 09:59:10.163551       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:59:10.163802       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 09:59:10.163946       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:59:10.163962       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:59:10.163980       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:59:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:59:10.462143       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:59:10.462221       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:59:10.462239       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:59:10.462584       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:59:10.662906       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:59:10.662938       1 metrics.go:72] Registering metrics
	I1115 09:59:10.663087       1 controller.go:711] "Syncing nftables rules"
	I1115 09:59:20.470479       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 09:59:20.470540       1 main.go:301] handling current node
	I1115 09:59:30.465365       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 09:59:30.465463       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8a5febcb768d0080b2c36c81d52fc3dcec486020a7715dcc897d9af0444755b0] <==
	I1115 09:58:58.227238       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:58:58.231160       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:58:58.231883       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 09:58:58.236473       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 09:58:58.238829       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:58:58.239259       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:58:58.418180       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:58:59.130182       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 09:58:59.135336       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 09:58:59.135370       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:59:00.177963       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:59:00.215575       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:59:00.334918       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 09:59:00.340689       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1115 09:59:00.341632       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:59:00.345775       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:59:01.153983       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:59:01.244881       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:59:01.255155       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 09:59:01.262850       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:59:06.155689       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:59:06.911342       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:59:06.916140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 09:59:07.308322       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1115 09:59:31.306000       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:54830: use of closed network connection
	
	
	==> kube-controller-manager [a3e9418a48f698f5a8ec3cf3fe61059153104576b39e8eda9ee14ffb6ab51bd4] <==
	I1115 09:59:06.152492       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 09:59:06.152525       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 09:59:06.152529       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 09:59:06.153124       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:59:06.153160       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 09:59:06.153217       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 09:59:06.153358       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 09:59:06.153443       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 09:59:06.153507       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-559401"
	I1115 09:59:06.153527       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:59:06.153553       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 09:59:06.153622       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 09:59:06.153682       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 09:59:06.153720       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:59:06.153839       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 09:59:06.153862       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 09:59:06.154049       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 09:59:06.154668       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 09:59:06.154686       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 09:59:06.154742       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 09:59:06.155092       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 09:59:06.156735       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 09:59:06.158981       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:59:06.175067       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 09:59:21.156054       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a696f1248eed50cb89df2e138e29962634ee81e31d3ef2cb38a81d57fab7038a] <==
	I1115 09:59:07.747234       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:59:07.831829       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:59:07.932898       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:59:07.932935       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 09:59:07.933052       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:59:07.953198       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:59:07.953249       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:59:07.958782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:59:07.959200       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:59:07.959264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:59:07.960872       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:59:07.960896       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:59:07.960911       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:59:07.960975       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:59:07.961276       1 config.go:200] "Starting service config controller"
	I1115 09:59:07.961297       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:59:07.961362       1 config.go:309] "Starting node config controller"
	I1115 09:59:07.961449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:59:07.961460       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:59:08.061246       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:59:08.061292       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:59:08.061354       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [118ed197c237c9b6056ac82b247ee4647906d11c60d1b93adb4432952e274609] <==
	E1115 09:58:58.178127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:58:58.178191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:58:58.178188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:58:58.178199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:58:58.178282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:58:58.990697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:58:59.085550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:58:59.106859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:58:59.116080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:58:59.162141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:58:59.166139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:58:59.192904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:58:59.226068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:58:59.254245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:58:59.274412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:58:59.276367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:58:59.321974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:58:59.326296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:58:59.349922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:58:59.390609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:58:59.598755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:58:59.666379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:58:59.672527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:58:59.692199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1115 09:59:02.475581       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 09:59:02 no-preload-559401 kubelet[2304]: I1115 09:59:02.187283    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-559401" podStartSLOduration=3.187263486 podStartE2EDuration="3.187263486s" podCreationTimestamp="2025-11-15 09:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:02.176827814 +0000 UTC m=+1.158354585" watchObservedRunningTime="2025-11-15 09:59:02.187263486 +0000 UTC m=+1.168790256"
	Nov 15 09:59:02 no-preload-559401 kubelet[2304]: I1115 09:59:02.187488    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-559401" podStartSLOduration=1.187473797 podStartE2EDuration="1.187473797s" podCreationTimestamp="2025-11-15 09:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:02.187218881 +0000 UTC m=+1.168745656" watchObservedRunningTime="2025-11-15 09:59:02.187473797 +0000 UTC m=+1.169000570"
	Nov 15 09:59:02 no-preload-559401 kubelet[2304]: I1115 09:59:02.208689    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-559401" podStartSLOduration=2.208658704 podStartE2EDuration="2.208658704s" podCreationTimestamp="2025-11-15 09:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:02.197042827 +0000 UTC m=+1.178569601" watchObservedRunningTime="2025-11-15 09:59:02.208658704 +0000 UTC m=+1.190185476"
	Nov 15 09:59:02 no-preload-559401 kubelet[2304]: I1115 09:59:02.219007    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-559401" podStartSLOduration=1.218984024 podStartE2EDuration="1.218984024s" podCreationTimestamp="2025-11-15 09:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:02.208864316 +0000 UTC m=+1.190391088" watchObservedRunningTime="2025-11-15 09:59:02.218984024 +0000 UTC m=+1.200510796"
	Nov 15 09:59:06 no-preload-559401 kubelet[2304]: I1115 09:59:06.239606    2304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 09:59:06 no-preload-559401 kubelet[2304]: I1115 09:59:06.240354    2304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428143    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fa4c5d0-7a46-4a00-ac93-cffc63d77181-lib-modules\") pod \"kindnet-b5x55\" (UID: \"6fa4c5d0-7a46-4a00-ac93-cffc63d77181\") " pod="kube-system/kindnet-b5x55"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428196    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4d77915-a105-43ac-bd1c-73bdf1bbcec4-kube-proxy\") pod \"kube-proxy-sbk5r\" (UID: \"b4d77915-a105-43ac-bd1c-73bdf1bbcec4\") " pod="kube-system/kube-proxy-sbk5r"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428221    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4d77915-a105-43ac-bd1c-73bdf1bbcec4-xtables-lock\") pod \"kube-proxy-sbk5r\" (UID: \"b4d77915-a105-43ac-bd1c-73bdf1bbcec4\") " pod="kube-system/kube-proxy-sbk5r"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428240    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4d77915-a105-43ac-bd1c-73bdf1bbcec4-lib-modules\") pod \"kube-proxy-sbk5r\" (UID: \"b4d77915-a105-43ac-bd1c-73bdf1bbcec4\") " pod="kube-system/kube-proxy-sbk5r"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428264    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fa4c5d0-7a46-4a00-ac93-cffc63d77181-xtables-lock\") pod \"kindnet-b5x55\" (UID: \"6fa4c5d0-7a46-4a00-ac93-cffc63d77181\") " pod="kube-system/kindnet-b5x55"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428286    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dpm\" (UniqueName: \"kubernetes.io/projected/b4d77915-a105-43ac-bd1c-73bdf1bbcec4-kube-api-access-87dpm\") pod \"kube-proxy-sbk5r\" (UID: \"b4d77915-a105-43ac-bd1c-73bdf1bbcec4\") " pod="kube-system/kube-proxy-sbk5r"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428352    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6fa4c5d0-7a46-4a00-ac93-cffc63d77181-cni-cfg\") pod \"kindnet-b5x55\" (UID: \"6fa4c5d0-7a46-4a00-ac93-cffc63d77181\") " pod="kube-system/kindnet-b5x55"
	Nov 15 09:59:07 no-preload-559401 kubelet[2304]: I1115 09:59:07.428417    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677m5\" (UniqueName: \"kubernetes.io/projected/6fa4c5d0-7a46-4a00-ac93-cffc63d77181-kube-api-access-677m5\") pod \"kindnet-b5x55\" (UID: \"6fa4c5d0-7a46-4a00-ac93-cffc63d77181\") " pod="kube-system/kindnet-b5x55"
	Nov 15 09:59:08 no-preload-559401 kubelet[2304]: I1115 09:59:08.144911    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sbk5r" podStartSLOduration=1.144893406 podStartE2EDuration="1.144893406s" podCreationTimestamp="2025-11-15 09:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:08.144740563 +0000 UTC m=+7.126267336" watchObservedRunningTime="2025-11-15 09:59:08.144893406 +0000 UTC m=+7.126420177"
	Nov 15 09:59:10 no-preload-559401 kubelet[2304]: I1115 09:59:10.152681    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b5x55" podStartSLOduration=0.88977867 podStartE2EDuration="3.152660056s" podCreationTimestamp="2025-11-15 09:59:07 +0000 UTC" firstStartedPulling="2025-11-15 09:59:07.659227512 +0000 UTC m=+6.640754281" lastFinishedPulling="2025-11-15 09:59:09.922108904 +0000 UTC m=+8.903635667" observedRunningTime="2025-11-15 09:59:10.15225972 +0000 UTC m=+9.133786493" watchObservedRunningTime="2025-11-15 09:59:10.152660056 +0000 UTC m=+9.134186832"
	Nov 15 09:59:20 no-preload-559401 kubelet[2304]: I1115 09:59:20.643190    2304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 09:59:20 no-preload-559401 kubelet[2304]: I1115 09:59:20.731713    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/582f90bb-ec3c-4d2b-aa98-31dc4cab6d88-config-volume\") pod \"coredns-66bc5c9577-dh55n\" (UID: \"582f90bb-ec3c-4d2b-aa98-31dc4cab6d88\") " pod="kube-system/coredns-66bc5c9577-dh55n"
	Nov 15 09:59:20 no-preload-559401 kubelet[2304]: I1115 09:59:20.731780    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8a18d053-ec9c-429e-b84b-5565c197d2a5-tmp\") pod \"storage-provisioner\" (UID: \"8a18d053-ec9c-429e-b84b-5565c197d2a5\") " pod="kube-system/storage-provisioner"
	Nov 15 09:59:20 no-preload-559401 kubelet[2304]: I1115 09:59:20.731811    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz5b\" (UniqueName: \"kubernetes.io/projected/8a18d053-ec9c-429e-b84b-5565c197d2a5-kube-api-access-7bz5b\") pod \"storage-provisioner\" (UID: \"8a18d053-ec9c-429e-b84b-5565c197d2a5\") " pod="kube-system/storage-provisioner"
	Nov 15 09:59:20 no-preload-559401 kubelet[2304]: I1115 09:59:20.731837    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mgv\" (UniqueName: \"kubernetes.io/projected/582f90bb-ec3c-4d2b-aa98-31dc4cab6d88-kube-api-access-b6mgv\") pod \"coredns-66bc5c9577-dh55n\" (UID: \"582f90bb-ec3c-4d2b-aa98-31dc4cab6d88\") " pod="kube-system/coredns-66bc5c9577-dh55n"
	Nov 15 09:59:21 no-preload-559401 kubelet[2304]: I1115 09:59:21.191144    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dh55n" podStartSLOduration=14.191120135 podStartE2EDuration="14.191120135s" podCreationTimestamp="2025-11-15 09:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:21.191013152 +0000 UTC m=+20.172539936" watchObservedRunningTime="2025-11-15 09:59:21.191120135 +0000 UTC m=+20.172646907"
	Nov 15 09:59:21 no-preload-559401 kubelet[2304]: I1115 09:59:21.191279    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.191271125 podStartE2EDuration="14.191271125s" podCreationTimestamp="2025-11-15 09:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 09:59:21.17931427 +0000 UTC m=+20.160841042" watchObservedRunningTime="2025-11-15 09:59:21.191271125 +0000 UTC m=+20.172797897"
	Nov 15 09:59:23 no-preload-559401 kubelet[2304]: I1115 09:59:23.346146    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k875j\" (UniqueName: \"kubernetes.io/projected/4972a866-c48a-427f-8837-dd6d8889a805-kube-api-access-k875j\") pod \"busybox\" (UID: \"4972a866-c48a-427f-8837-dd6d8889a805\") " pod="default/busybox"
	Nov 15 09:59:26 no-preload-559401 kubelet[2304]: I1115 09:59:26.194277    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.978883732 podStartE2EDuration="3.194256459s" podCreationTimestamp="2025-11-15 09:59:23 +0000 UTC" firstStartedPulling="2025-11-15 09:59:23.540413707 +0000 UTC m=+22.521940463" lastFinishedPulling="2025-11-15 09:59:25.755786436 +0000 UTC m=+24.737313190" observedRunningTime="2025-11-15 09:59:26.194254417 +0000 UTC m=+25.175781187" watchObservedRunningTime="2025-11-15 09:59:26.194256459 +0000 UTC m=+25.175783230"
	
	
	==> storage-provisioner [588e10127f47a891ae86b67b0e07d4c4066c82070c1e3f466483e36bfdebfa66] <==
	I1115 09:59:21.039580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 09:59:21.049009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 09:59:21.049086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 09:59:21.051727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:21.057434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 09:59:21.057670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 09:59:21.057925       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-559401_7b19fecd-b292-47f9-8db7-f31206ab73df!
	I1115 09:59:21.057946       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74ac0aca-4a5f-408d-9b7f-c3e70ed087ad", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-559401_7b19fecd-b292-47f9-8db7-f31206ab73df became leader
	W1115 09:59:21.061219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:21.066845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 09:59:21.158075       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-559401_7b19fecd-b292-47f9-8db7-f31206ab73df!
	W1115 09:59:23.070311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:23.074032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:25.076772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:25.080276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:27.083089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:27.087020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:29.089642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:29.093478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:31.096926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:59:31.100653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-559401 -n no-preload-559401
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-559401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.29s)
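Note on the logs above: the kube-scheduler "Failed to watch ... is forbidden" errors are typical of the start-up window before RBAC bindings propagate (they stop once the informer caches sync at 09:59:02), and the repeated storage-provisioner warnings come from its leader-election object, which is still a v1 Endpoints resource. A minimal sketch for inspecting that object and the EndpointSlice API the warning points at, assuming the no-preload-559401 kubectl context from this run is still reachable (illustrative only, not part of the test):

	# Leader-election Endpoints object; the name k8s.io-minikube-hostpath is taken from the provisioner log above.
	kubectl --context no-preload-559401 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The discovery.k8s.io/v1 EndpointSlice resources the deprecation warning recommends instead.
	kubectl --context no-preload-559401 -n kube-system get endpointslices.discovery.k8s.io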

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-335655 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-335655 --alsologtostderr -v=1: exit status 80 (1.89370397s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-335655 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:00:33.077562  610586 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:00:33.077841  610586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:33.077851  610586 out.go:374] Setting ErrFile to fd 2...
	I1115 10:00:33.077855  610586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:33.078071  610586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:00:33.078298  610586 out.go:368] Setting JSON to false
	I1115 10:00:33.078349  610586 mustload.go:66] Loading cluster: old-k8s-version-335655
	I1115 10:00:33.078705  610586 config.go:182] Loaded profile config "old-k8s-version-335655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:00:33.079119  610586 cli_runner.go:164] Run: docker container inspect old-k8s-version-335655 --format={{.State.Status}}
	I1115 10:00:33.101348  610586 host.go:66] Checking if "old-k8s-version-335655" exists ...
	I1115 10:00:33.101689  610586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:33.169614  610586 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-15 10:00:33.156293539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:33.170566  610586 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-335655 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:00:33.176571  610586 out.go:179] * Pausing node old-k8s-version-335655 ... 
	I1115 10:00:33.181564  610586 host.go:66] Checking if "old-k8s-version-335655" exists ...
	I1115 10:00:33.181908  610586 ssh_runner.go:195] Run: systemctl --version
	I1115 10:00:33.181954  610586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-335655
	I1115 10:00:33.201567  610586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/old-k8s-version-335655/id_rsa Username:docker}
	I1115 10:00:33.296486  610586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:00:33.309804  610586 pause.go:52] kubelet running: true
	I1115 10:00:33.309875  610586 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:00:33.490197  610586 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:00:33.490323  610586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:00:33.587952  610586 cri.go:89] found id: "a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09"
	I1115 10:00:33.587979  610586 cri.go:89] found id: "ac30ed88cef6c844e51a8a22ea8e22de89811d932dce6e1c6e5cc0b93c9e14b2"
	I1115 10:00:33.587985  610586 cri.go:89] found id: "831cf76b7844ee6e290663629081fd160f5eee162570c153fa316a7695614da3"
	I1115 10:00:33.587990  610586 cri.go:89] found id: "b2da0d5358c4a17df789e6829fc9570ec901a6648cf6554feac6498f10accaa1"
	I1115 10:00:33.587994  610586 cri.go:89] found id: "766f51f768df62ae9a4d892911a3e4b3efb88576a90fdfbb7eadf4ae1879169c"
	I1115 10:00:33.587998  610586 cri.go:89] found id: "8e7e9bd77bc1f1f89b796930001c5a1902359d0cf7e181bc548e5bc2a4ee0988"
	I1115 10:00:33.588003  610586 cri.go:89] found id: "4f36f52df9e1823d0f8b7fcb1bd85954b910702e8d94abe040010ef7749c5652"
	I1115 10:00:33.588007  610586 cri.go:89] found id: "bf66c3337cc33c38b50cd84c0408339ca358893b510f2a8a1222686d78ed613c"
	I1115 10:00:33.588010  610586 cri.go:89] found id: "b1fb5f089cc60d72969c503d4ac81cc9dad2cb2197b8fdf047b094dd5609c21c"
	I1115 10:00:33.588023  610586 cri.go:89] found id: "9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7"
	I1115 10:00:33.588031  610586 cri.go:89] found id: "1b0d120a2950f97fb086bc8728a2fc50b1cc4017835ef14197769a9e88ee301b"
	I1115 10:00:33.588035  610586 cri.go:89] found id: ""
	I1115 10:00:33.588078  610586 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:00:33.603637  610586 retry.go:31] will retry after 335.493949ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:33Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:00:33.940244  610586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:00:33.959158  610586 pause.go:52] kubelet running: false
	I1115 10:00:33.959224  610586 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:00:34.115330  610586 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:00:34.115426  610586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:00:34.183162  610586 cri.go:89] found id: "a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09"
	I1115 10:00:34.183183  610586 cri.go:89] found id: "ac30ed88cef6c844e51a8a22ea8e22de89811d932dce6e1c6e5cc0b93c9e14b2"
	I1115 10:00:34.183188  610586 cri.go:89] found id: "831cf76b7844ee6e290663629081fd160f5eee162570c153fa316a7695614da3"
	I1115 10:00:34.183192  610586 cri.go:89] found id: "b2da0d5358c4a17df789e6829fc9570ec901a6648cf6554feac6498f10accaa1"
	I1115 10:00:34.183196  610586 cri.go:89] found id: "766f51f768df62ae9a4d892911a3e4b3efb88576a90fdfbb7eadf4ae1879169c"
	I1115 10:00:34.183200  610586 cri.go:89] found id: "8e7e9bd77bc1f1f89b796930001c5a1902359d0cf7e181bc548e5bc2a4ee0988"
	I1115 10:00:34.183204  610586 cri.go:89] found id: "4f36f52df9e1823d0f8b7fcb1bd85954b910702e8d94abe040010ef7749c5652"
	I1115 10:00:34.183207  610586 cri.go:89] found id: "bf66c3337cc33c38b50cd84c0408339ca358893b510f2a8a1222686d78ed613c"
	I1115 10:00:34.183211  610586 cri.go:89] found id: "b1fb5f089cc60d72969c503d4ac81cc9dad2cb2197b8fdf047b094dd5609c21c"
	I1115 10:00:34.183219  610586 cri.go:89] found id: "9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7"
	I1115 10:00:34.183223  610586 cri.go:89] found id: "1b0d120a2950f97fb086bc8728a2fc50b1cc4017835ef14197769a9e88ee301b"
	I1115 10:00:34.183227  610586 cri.go:89] found id: ""
	I1115 10:00:34.183270  610586 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:00:34.195120  610586 retry.go:31] will retry after 384.609416ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:34Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:00:34.580608  610586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:00:34.596292  610586 pause.go:52] kubelet running: false
	I1115 10:00:34.596358  610586 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:00:34.778012  610586 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:00:34.778113  610586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:00:34.869224  610586 cri.go:89] found id: "a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09"
	I1115 10:00:34.869256  610586 cri.go:89] found id: "ac30ed88cef6c844e51a8a22ea8e22de89811d932dce6e1c6e5cc0b93c9e14b2"
	I1115 10:00:34.869262  610586 cri.go:89] found id: "831cf76b7844ee6e290663629081fd160f5eee162570c153fa316a7695614da3"
	I1115 10:00:34.869277  610586 cri.go:89] found id: "b2da0d5358c4a17df789e6829fc9570ec901a6648cf6554feac6498f10accaa1"
	I1115 10:00:34.869281  610586 cri.go:89] found id: "766f51f768df62ae9a4d892911a3e4b3efb88576a90fdfbb7eadf4ae1879169c"
	I1115 10:00:34.869286  610586 cri.go:89] found id: "8e7e9bd77bc1f1f89b796930001c5a1902359d0cf7e181bc548e5bc2a4ee0988"
	I1115 10:00:34.869290  610586 cri.go:89] found id: "4f36f52df9e1823d0f8b7fcb1bd85954b910702e8d94abe040010ef7749c5652"
	I1115 10:00:34.869294  610586 cri.go:89] found id: "bf66c3337cc33c38b50cd84c0408339ca358893b510f2a8a1222686d78ed613c"
	I1115 10:00:34.869298  610586 cri.go:89] found id: "b1fb5f089cc60d72969c503d4ac81cc9dad2cb2197b8fdf047b094dd5609c21c"
	I1115 10:00:34.869306  610586 cri.go:89] found id: "9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7"
	I1115 10:00:34.869322  610586 cri.go:89] found id: "1b0d120a2950f97fb086bc8728a2fc50b1cc4017835ef14197769a9e88ee301b"
	I1115 10:00:34.869326  610586 cri.go:89] found id: ""
	I1115 10:00:34.869377  610586 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:00:34.885631  610586 out.go:203] 
	W1115 10:00:34.888439  610586 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:00:34.888462  610586 out.go:285] * 
	* 
	W1115 10:00:34.895124  610586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:00:34.896437  610586 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-335655 --alsologtostderr -v=1 failed: exit status 80
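The stderr above shows where the pause flow gives up: minikube disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers through crictl, then runs `sudo runc list -f json`, which fails on every retry with "open /run/runc: no such file or directory" on this crio node, so the command exits with GUEST_PAUSE (exit status 80). A minimal sketch for re-running the failing check by hand, assuming the old-k8s-version-335655 profile is still up (illustrative only, not part of the test):

	# Re-run the exact command the pause flow retried; expected to fail the same way.
	minikube ssh -p old-k8s-version-335655 -- sudo runc list -f json
	# Confirm the state directory runc was asked to read is absent on the node.
	minikube ssh -p old-k8s-version-335655 -- sudo ls /run/runc
	# The containers themselves are still visible through the CRI, matching the crictl output above.
	minikube ssh -p old-k8s-version-335655 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system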
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-335655
helpers_test.go:243: (dbg) docker inspect old-k8s-version-335655:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482",
	        "Created": "2025-11-15T09:58:23.178019961Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 600217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:59:35.475920305Z",
	            "FinishedAt": "2025-11-15T09:59:34.549290842Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/hosts",
	        "LogPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482-json.log",
	        "Name": "/old-k8s-version-335655",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-335655:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-335655",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482",
	                "LowerDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-335655",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-335655/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-335655",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-335655",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-335655",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4f9181f9e2b158e2da877f76455f6df83441c94c111397d330ec15102306170d",
	            "SandboxKey": "/var/run/docker/netns/4f9181f9e2b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-335655": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5f22abf6c460e469b71da8d9c04b0cc70f79b863fc7fb95c973cc15281dd62ec",
	                    "EndpointID": "cace58f03fa557d78b4d456d1eefffa03600add9ace64e163e442f5df227031d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:9d:47:c8:31:9c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-335655",
	                        "e7381b09c1c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
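Note: the host port bound to any container port can be read back from the same inspect data with a Go-template query. A minimal sketch against the container shown above, mirroring the template the test harness itself runs later in these logs:

	# query the host port published for the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-335655
	# per the Ports mapping in the inspect output above, this should print 33439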
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655: exit status 2 (354.536272ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335655 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-335655 logs -n 25: (1.194294412s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p force-systemd-flag-896620                                                                                                                                                                                                                  │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p cert-options-759344 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ stop    │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p NoKubernetes-941483 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p NoKubernetes-941483 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │                     │
	│ delete  │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ ssh     │ cert-options-759344 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p cert-options-759344 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p cert-options-759344                                                                                                                                                                                                                        │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p old-k8s-version-335655 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p no-preload-559401 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ addons  │ enable dashboard -p no-preload-559401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-405833 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-405833 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p kubernetes-upgrade-405833                                                                                                                                                                                                                  │ kubernetes-upgrade-405833 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513        │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ image   │ old-k8s-version-335655 image list --format=json                                                                                                                                                                                               │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p old-k8s-version-335655 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:00:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:00:15.708610  608059 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:00:15.708779  608059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:15.708791  608059 out.go:374] Setting ErrFile to fd 2...
	I1115 10:00:15.708798  608059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:15.709098  608059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:00:15.709669  608059 out.go:368] Setting JSON to false
	I1115 10:00:15.710855  608059 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6157,"bootTime":1763194659,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:00:15.710912  608059 start.go:143] virtualization: kvm guest
	I1115 10:00:15.712856  608059 out.go:179] * [embed-certs-430513] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:00:15.714277  608059 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:00:15.714349  608059 notify.go:221] Checking for updates...
	I1115 10:00:15.717017  608059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:00:15.718364  608059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:00:15.719714  608059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:00:15.723569  608059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:00:15.724770  608059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:00:15.726349  608059 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:15.726491  608059 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:15.726613  608059 config.go:182] Loaded profile config "old-k8s-version-335655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:00:15.726735  608059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:00:15.753551  608059 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:00:15.753732  608059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:15.813899  608059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:15.803755416 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:15.814001  608059 docker.go:319] overlay module found
	I1115 10:00:15.815838  608059 out.go:179] * Using the docker driver based on user configuration
	I1115 10:00:15.817196  608059 start.go:309] selected driver: docker
	I1115 10:00:15.817213  608059 start.go:930] validating driver "docker" against <nil>
	I1115 10:00:15.817228  608059 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:00:15.818017  608059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:15.880976  608059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:15.870477014 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:15.881150  608059 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:00:15.881366  608059 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:00:15.883094  608059 out.go:179] * Using Docker driver with root privileges
	I1115 10:00:15.884301  608059 cni.go:84] Creating CNI manager for ""
	I1115 10:00:15.884373  608059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:00:15.884388  608059 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:00:15.884469  608059 start.go:353] cluster config:
	{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:00:15.885777  608059 out.go:179] * Starting "embed-certs-430513" primary control-plane node in "embed-certs-430513" cluster
	I1115 10:00:15.886890  608059 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:00:15.888287  608059 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:00:15.889743  608059 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:15.889787  608059 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:00:15.889823  608059 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:00:15.889841  608059 cache.go:65] Caching tarball of preloaded images
	I1115 10:00:15.889925  608059 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:00:15.889938  608059 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:00:15.890031  608059 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json ...
	I1115 10:00:15.890049  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json: {Name:mk89b01ce76928fbbfb611abf2c1b13ff91226bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:15.911596  608059 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:00:15.911628  608059 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:00:15.911649  608059 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:00:15.911689  608059 start.go:360] acquireMachinesLock for embed-certs-430513: {Name:mk23e9dcdc23745b328473e6d9e82c519bc86048 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:00:15.911808  608059 start.go:364] duration metric: took 95.323µs to acquireMachinesLock for "embed-certs-430513"
	I1115 10:00:15.911843  608059 start.go:93] Provisioning new machine with config: &{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:00:15.911951  608059 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:00:16.686512  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:18.687172  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:15.791584  599971 pod_ready.go:104] pod "coredns-5dd5756b68-j8hqh" is not "Ready", error: <nil>
	W1115 10:00:18.291021  599971 pod_ready.go:104] pod "coredns-5dd5756b68-j8hqh" is not "Ready", error: <nil>
	I1115 10:00:20.017697  599971 pod_ready.go:94] pod "coredns-5dd5756b68-j8hqh" is "Ready"
	I1115 10:00:20.017729  599971 pod_ready.go:86] duration metric: took 33.732797509s for pod "coredns-5dd5756b68-j8hqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.021145  599971 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.228598  599971 pod_ready.go:94] pod "etcd-old-k8s-version-335655" is "Ready"
	I1115 10:00:20.228629  599971 pod_ready.go:86] duration metric: took 207.456823ms for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.231416  599971 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.236192  599971 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-335655" is "Ready"
	I1115 10:00:20.236214  599971 pod_ready.go:86] duration metric: took 4.77503ms for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:15.914077  608059 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:00:15.914357  608059 start.go:159] libmachine.API.Create for "embed-certs-430513" (driver="docker")
	I1115 10:00:15.914427  608059 client.go:173] LocalClient.Create starting
	I1115 10:00:15.914527  608059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:00:15.914571  608059 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:15.914596  608059 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:15.914687  608059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:00:15.914720  608059 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:15.914737  608059 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:15.915235  608059 cli_runner.go:164] Run: docker network inspect embed-certs-430513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:00:15.933734  608059 cli_runner.go:211] docker network inspect embed-certs-430513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:00:15.933816  608059 network_create.go:284] running [docker network inspect embed-certs-430513] to gather additional debugging logs...
	I1115 10:00:15.933836  608059 cli_runner.go:164] Run: docker network inspect embed-certs-430513
	W1115 10:00:15.951772  608059 cli_runner.go:211] docker network inspect embed-certs-430513 returned with exit code 1
	I1115 10:00:15.951803  608059 network_create.go:287] error running [docker network inspect embed-certs-430513]: docker network inspect embed-certs-430513: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-430513 not found
	I1115 10:00:15.951820  608059 network_create.go:289] output of [docker network inspect embed-certs-430513]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-430513 not found
	
	** /stderr **
	I1115 10:00:15.951950  608059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:00:15.970811  608059 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:00:15.971720  608059 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:00:15.972269  608059 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:00:15.973282  608059 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001daad40}
	I1115 10:00:15.973308  608059 network_create.go:124] attempt to create docker network embed-certs-430513 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:00:15.973370  608059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-430513 embed-certs-430513
	I1115 10:00:16.025651  608059 network_create.go:108] docker network embed-certs-430513 192.168.76.0/24 created
	I1115 10:00:16.025700  608059 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-430513" container
	I1115 10:00:16.025774  608059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:00:16.045102  608059 cli_runner.go:164] Run: docker volume create embed-certs-430513 --label name.minikube.sigs.k8s.io=embed-certs-430513 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:00:16.064668  608059 oci.go:103] Successfully created a docker volume embed-certs-430513
	I1115 10:00:16.064767  608059 cli_runner.go:164] Run: docker run --rm --name embed-certs-430513-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-430513 --entrypoint /usr/bin/test -v embed-certs-430513:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:00:16.494238  608059 oci.go:107] Successfully prepared a docker volume embed-certs-430513
	I1115 10:00:16.494315  608059 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:16.494328  608059 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:00:16.494410  608059 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-430513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:00:20.238943  599971 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.243717  599971 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-335655" is "Ready"
	I1115 10:00:20.243742  599971 pod_ready.go:86] duration metric: took 4.776276ms for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.246138  599971 pod_ready.go:83] waiting for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.618258  599971 pod_ready.go:94] pod "kube-proxy-ndp6f" is "Ready"
	I1115 10:00:20.618286  599971 pod_ready.go:86] duration metric: took 372.130031ms for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.818277  599971 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:21.217877  599971 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-335655" is "Ready"
	I1115 10:00:21.217910  599971 pod_ready.go:86] duration metric: took 399.603947ms for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:21.217923  599971 pod_ready.go:40] duration metric: took 34.937944844s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:00:21.267153  599971 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 10:00:21.269123  599971 out.go:203] 
	W1115 10:00:21.270236  599971 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:00:21.271264  599971 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:00:21.272335  599971 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-335655" cluster and "default" namespace by default
	W1115 10:00:21.186510  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:23.685284  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	I1115 10:00:20.937303  608059 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-430513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.442842168s)
	I1115 10:00:20.937342  608059 kic.go:203] duration metric: took 4.443007965s to extract preloaded images to volume ...
	W1115 10:00:20.937489  608059 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 10:00:20.937548  608059 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 10:00:20.937596  608059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:00:20.996296  608059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-430513 --name embed-certs-430513 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-430513 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-430513 --network embed-certs-430513 --ip 192.168.76.2 --volume embed-certs-430513:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:00:21.316008  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Running}}
	I1115 10:00:21.338089  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:00:21.358812  608059 cli_runner.go:164] Run: docker exec embed-certs-430513 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:00:21.413243  608059 oci.go:144] the created container "embed-certs-430513" has a running status.
	I1115 10:00:21.413320  608059 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa...
	I1115 10:00:22.057988  608059 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:00:22.084243  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:00:22.103850  608059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:00:22.103871  608059 kic_runner.go:114] Args: [docker exec --privileged embed-certs-430513 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:00:22.155819  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:00:22.174488  608059 machine.go:94] provisionDockerMachine start ...
	I1115 10:00:22.174623  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:22.193449  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:22.193742  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:22.193760  608059 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:00:22.323704  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-430513
	
	I1115 10:00:22.323737  608059 ubuntu.go:182] provisioning hostname "embed-certs-430513"
	I1115 10:00:22.323807  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:22.342280  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:22.342553  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:22.342571  608059 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-430513 && echo "embed-certs-430513" | sudo tee /etc/hostname
	I1115 10:00:22.480669  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-430513
	
	I1115 10:00:22.480750  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:22.498628  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:22.498868  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:22.498895  608059 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-430513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-430513/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-430513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:00:22.627263  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:00:22.627294  608059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:00:22.627326  608059 ubuntu.go:190] setting up certificates
	I1115 10:00:22.627338  608059 provision.go:84] configureAuth start
	I1115 10:00:22.627424  608059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:00:22.647603  608059 provision.go:143] copyHostCerts
	I1115 10:00:22.647684  608059 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:00:22.647702  608059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:00:22.647796  608059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:00:22.647973  608059 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:00:22.647984  608059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:00:22.648029  608059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:00:22.648135  608059 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:00:22.648147  608059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:00:22.648190  608059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:00:22.648280  608059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-430513 san=[127.0.0.1 192.168.76.2 embed-certs-430513 localhost minikube]
	I1115 10:00:23.554592  608059 provision.go:177] copyRemoteCerts
	I1115 10:00:23.554665  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:00:23.554705  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:23.573297  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:23.669362  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:00:23.690532  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 10:00:23.709789  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:00:23.728707  608059 provision.go:87] duration metric: took 1.101353214s to configureAuth
	I1115 10:00:23.728737  608059 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:00:23.728924  608059 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:23.729041  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:23.748048  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:23.748347  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:23.748366  608059 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:00:23.995027  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:00:23.995059  608059 machine.go:97] duration metric: took 1.820544085s to provisionDockerMachine
	I1115 10:00:23.995072  608059 client.go:176] duration metric: took 8.0806338s to LocalClient.Create
	I1115 10:00:23.995100  608059 start.go:167] duration metric: took 8.080741754s to libmachine.API.Create "embed-certs-430513"
	I1115 10:00:23.995112  608059 start.go:293] postStartSetup for "embed-certs-430513" (driver="docker")
	I1115 10:00:23.995125  608059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:00:23.995181  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:00:23.995218  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.013672  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.110197  608059 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:00:24.113800  608059 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:00:24.113839  608059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:00:24.113853  608059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:00:24.113909  608059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:00:24.114000  608059 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:00:24.114119  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:00:24.121885  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:00:24.142235  608059 start.go:296] duration metric: took 147.103643ms for postStartSetup
	I1115 10:00:24.142620  608059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:00:24.161798  608059 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json ...
	I1115 10:00:24.162079  608059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:00:24.162129  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.180084  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.271622  608059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:00:24.276270  608059 start.go:128] duration metric: took 8.364301752s to createHost
	I1115 10:00:24.276298  608059 start.go:83] releasing machines lock for "embed-certs-430513", held for 8.364472329s
	I1115 10:00:24.276373  608059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:00:24.294034  608059 ssh_runner.go:195] Run: cat /version.json
	I1115 10:00:24.294073  608059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:00:24.294087  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.294133  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.313667  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.314036  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.403541  608059 ssh_runner.go:195] Run: systemctl --version
	I1115 10:00:24.458025  608059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:00:24.493534  608059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:00:24.498309  608059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:00:24.498375  608059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:00:24.525747  608059 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:00:24.525776  608059 start.go:496] detecting cgroup driver to use...
	I1115 10:00:24.525811  608059 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:00:24.525861  608059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:00:24.542854  608059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:00:24.556777  608059 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:00:24.556841  608059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:00:24.574640  608059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:00:24.593466  608059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:00:24.688945  608059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:00:24.790167  608059 docker.go:234] disabling docker service ...
	I1115 10:00:24.790241  608059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:00:24.809552  608059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:00:24.823108  608059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:00:24.910944  608059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:00:24.997605  608059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:00:25.010605  608059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:00:25.025277  608059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:00:25.025332  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.036344  608059 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:00:25.036438  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.045991  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.055103  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.064486  608059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:00:25.072778  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.081502  608059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.096171  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
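The sed edits above configure /etc/crio/crio.conf.d/02-crio.conf for this profile: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Reconstructed from those substitutions alone (the rest of the file is not shown in the log), the relevant fragment should end up as:

  # /etc/crio/crio.conf.d/02-crio.conf (reconstructed fragment)
  pause_image = "registry.k8s.io/pause:3.10.1"
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]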
	I1115 10:00:25.105407  608059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:00:25.113841  608059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:00:25.121252  608059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:00:25.207550  608059 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:00:25.308517  608059 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:00:25.308592  608059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:00:25.312429  608059 start.go:564] Will wait 60s for crictl version
	I1115 10:00:25.312491  608059 ssh_runner.go:195] Run: which crictl
	I1115 10:00:25.316035  608059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:00:25.339974  608059 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:00:25.340046  608059 ssh_runner.go:195] Run: crio --version
	I1115 10:00:25.368492  608059 ssh_runner.go:195] Run: crio --version
	I1115 10:00:25.398381  608059 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:00:25.399561  608059 cli_runner.go:164] Run: docker network inspect embed-certs-430513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:00:25.417461  608059 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:00:25.421561  608059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
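The grep/cp pair above is how minikube pins host.minikube.internal in the guest's /etc/hosts: check for the entry first, and only if it is missing rewrite the file without any stale entry and append the current gateway IP. A sketch of the same idempotent pattern, with the IP and hostname taken from this run:

  # add host.minikube.internal only if it is not already present
  if ! grep -q $'192.168.76.1\thost.minikube.internal' /etc/hosts; then
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
  fi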
	I1115 10:00:25.431735  608059 kubeadm.go:884] updating cluster {Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:00:25.431862  608059 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:25.431911  608059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:00:25.463605  608059 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:00:25.463627  608059 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:00:25.463673  608059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:00:25.488815  608059 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:00:25.488840  608059 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:00:25.488848  608059 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:00:25.488935  608059 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-430513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:00:25.489000  608059 ssh_runner.go:195] Run: crio config
	I1115 10:00:25.536667  608059 cni.go:84] Creating CNI manager for ""
	I1115 10:00:25.536690  608059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:00:25.536707  608059 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:00:25.536727  608059 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-430513 NodeName:embed-certs-430513 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:00:25.536847  608059 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-430513"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
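The YAML above is the complete kubeadm configuration minikube renders for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file); a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml and consumed by the kubeadm init invocation later in this log. To sanity-check a rendered file like this by hand, a dry run against the same config is usually enough (sketch; "kubeadm" here stands for the versioned binary this log uses, /var/lib/minikube/binaries/v1.34.1/kubeadm):

  # validate the rendered config without touching the node
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run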
	
	I1115 10:00:25.536924  608059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:00:25.545726  608059 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:00:25.545801  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:00:25.554377  608059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:00:25.567263  608059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:00:25.582981  608059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:00:25.596385  608059 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:00:25.600345  608059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:00:25.610171  608059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:00:25.693555  608059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:00:25.718140  608059 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513 for IP: 192.168.76.2
	I1115 10:00:25.718166  608059 certs.go:195] generating shared ca certs ...
	I1115 10:00:25.718183  608059 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:25.718313  608059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:00:25.718352  608059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:00:25.718364  608059 certs.go:257] generating profile certs ...
	I1115 10:00:25.718453  608059 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.key
	I1115 10:00:25.718482  608059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.crt with IP's: []
	I1115 10:00:26.003100  608059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.crt ...
	I1115 10:00:26.003130  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.crt: {Name:mkb008b092b0f5082d52920a5c4e51fed899848a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.003341  608059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.key ...
	I1115 10:00:26.003359  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.key: {Name:mke0cc7d4d5a62cc74c4376c34c7bd81d9e66b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.003513  608059 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc
	I1115 10:00:26.003535  608059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:00:26.060146  608059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc ...
	I1115 10:00:26.060181  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc: {Name:mk98203d099698eabd8febb1d6a468744cdc7f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.060427  608059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc ...
	I1115 10:00:26.060465  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc: {Name:mk447eeb36b8cf0dcbd09f968a274822e2f6fe1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.060588  608059 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt
	I1115 10:00:26.060724  608059 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key
	I1115 10:00:26.060821  608059 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key
	I1115 10:00:26.060846  608059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt with IP's: []
	I1115 10:00:26.326074  608059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt ...
	I1115 10:00:26.326106  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt: {Name:mkaf88efb27c2b61bc2261a5617b43f435eb6639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.326312  608059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key ...
	I1115 10:00:26.326331  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key: {Name:mka8787fffeb76423c9f58f8f91426a77ea1cb45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.326579  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 10:00:26.326627  608059 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 10:00:26.326644  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:00:26.326677  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:00:26.326709  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:00:26.326741  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 10:00:26.326796  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:00:26.327406  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:00:26.345995  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:00:26.363310  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:00:26.380995  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:00:26.398169  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:00:26.415294  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:00:26.432304  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:00:26.449701  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:00:26.466737  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:00:26.486838  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:00:26.505142  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:00:26.523118  608059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:00:26.536239  608059 ssh_runner.go:195] Run: openssl version
	I1115 10:00:26.542880  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:00:26.552341  608059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:00:26.556844  608059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:00:26.556908  608059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:00:26.592710  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:00:26.601778  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:00:26.610370  608059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:00:26.614363  608059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:00:26.614440  608059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:00:26.653659  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:00:26.664241  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:00:26.673375  608059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:00:26.678176  608059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:00:26.678242  608059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:00:26.715944  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
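The test/openssl/ln sequence above (repeated for 359063.pem, 3590632.pem and minikubeCA.pem) installs each certificate under /usr/share/ca-certificates and then exposes it in /etc/ssl/certs under its OpenSSL subject-hash name, which is how the system trust store looks certificates up. A condensed sketch of the pattern for one certificate, with paths taken from this run:

  # link a CA into the trust store under its subject-hash name
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this cert in the run above
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"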
	I1115 10:00:26.725194  608059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:00:26.728972  608059 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:00:26.729041  608059 kubeadm.go:401] StartCluster: {Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:00:26.729112  608059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:00:26.729178  608059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:00:26.758173  608059 cri.go:89] found id: ""
	I1115 10:00:26.758240  608059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:00:26.766619  608059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:00:26.774716  608059 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:00:26.774783  608059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:00:26.783166  608059 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:00:26.783187  608059 kubeadm.go:158] found existing configuration files:
	
	I1115 10:00:26.783230  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:00:26.790828  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:00:26.790897  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:00:26.798419  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:00:26.805865  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:00:26.805924  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:00:26.813233  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:00:26.821176  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:00:26.821240  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:00:26.828771  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:00:26.836267  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:00:26.836320  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:00:26.843702  608059 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:00:26.882311  608059 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:00:26.883042  608059 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:00:26.903704  608059 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:00:26.903800  608059 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:00:26.903842  608059 kubeadm.go:319] OS: Linux
	I1115 10:00:26.903918  608059 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:00:26.904018  608059 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:00:26.904096  608059 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:00:26.904170  608059 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:00:26.904235  608059 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:00:26.904293  608059 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:00:26.904343  608059 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:00:26.904430  608059 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 10:00:26.966935  608059 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:00:26.967094  608059 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:00:26.967243  608059 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:00:26.974633  608059 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 10:00:25.686292  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:28.185579  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	I1115 10:00:26.976940  608059 out.go:252]   - Generating certificates and keys ...
	I1115 10:00:26.977040  608059 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:00:26.977118  608059 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:00:27.027586  608059 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:00:27.458429  608059 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:00:27.915514  608059 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:00:28.215219  608059 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:00:28.375535  608059 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:00:28.375686  608059 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-430513 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:00:28.585842  608059 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:00:28.585985  608059 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-430513 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:00:29.063025  608059 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:00:29.134290  608059 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:00:29.232729  608059 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:00:29.232870  608059 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:00:29.519986  608059 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:00:29.625584  608059 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:00:30.340903  608059 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:00:30.591665  608059 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:00:30.864108  608059 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:00:30.864730  608059 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:00:30.869146  608059 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 10:00:30.186256  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	I1115 10:00:32.185794  603112 pod_ready.go:94] pod "coredns-66bc5c9577-dh55n" is "Ready"
	I1115 10:00:32.185826  603112 pod_ready.go:86] duration metric: took 31.505696726s for pod "coredns-66bc5c9577-dh55n" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.188464  603112 pod_ready.go:83] waiting for pod "etcd-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.192317  603112 pod_ready.go:94] pod "etcd-no-preload-559401" is "Ready"
	I1115 10:00:32.192346  603112 pod_ready.go:86] duration metric: took 3.858836ms for pod "etcd-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.194907  603112 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.198783  603112 pod_ready.go:94] pod "kube-apiserver-no-preload-559401" is "Ready"
	I1115 10:00:32.198807  603112 pod_ready.go:86] duration metric: took 3.876948ms for pod "kube-apiserver-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.200807  603112 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.384127  603112 pod_ready.go:94] pod "kube-controller-manager-no-preload-559401" is "Ready"
	I1115 10:00:32.384156  603112 pod_ready.go:86] duration metric: took 183.328238ms for pod "kube-controller-manager-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.584252  603112 pod_ready.go:83] waiting for pod "kube-proxy-sbk5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.984061  603112 pod_ready.go:94] pod "kube-proxy-sbk5r" is "Ready"
	I1115 10:00:32.984093  603112 pod_ready.go:86] duration metric: took 399.809892ms for pod "kube-proxy-sbk5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:33.184554  603112 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:33.586136  603112 pod_ready.go:94] pod "kube-scheduler-no-preload-559401" is "Ready"
	I1115 10:00:33.586168  603112 pod_ready.go:86] duration metric: took 401.589365ms for pod "kube-scheduler-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:33.586182  603112 pod_ready.go:40] duration metric: took 32.911878851s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:00:33.643015  603112 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:00:33.644942  603112 out.go:179] * Done! kubectl is now configured to use "no-preload-559401" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:00:06 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:06.109418729Z" level=info msg="Started container" PID=1720 containerID=1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper id=cff55503-8be1-4639-937b-1d2e9748276b name=/runtime.v1.RuntimeService/StartContainer sandboxID=73ed87c5251d9adbd2558562cb7cbfe8be871ee4d281d377b1086ae27cde8b4e
	Nov 15 10:00:07 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:07.068921459Z" level=info msg="Removing container: 969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b" id=33413fb2-ba2f-4973-a46e-9e9df3148370 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:07 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:07.154192569Z" level=info msg="Removed container 969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=33413fb2-ba2f-4973-a46e-9e9df3148370 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.082865994Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d372afb0-9eb2-4787-9990-48e92bd0d328 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.083886505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=df4eef44-1de2-4f05-8f95-6c60f3617778 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.08567302Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d702fe53-521f-4533-9335-77a36f59178e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.085833389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.092418618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.092816315Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/175e54009fa23c086f64f7a23e7914e9164cb78eecbe1be635688b722661227b/merged/etc/passwd: no such file or directory"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.092951312Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/175e54009fa23c086f64f7a23e7914e9164cb78eecbe1be635688b722661227b/merged/etc/group: no such file or directory"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.093364574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.123538112Z" level=info msg="Created container a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09: kube-system/storage-provisioner/storage-provisioner" id=d702fe53-521f-4533-9335-77a36f59178e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.124131462Z" level=info msg="Starting container: a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09" id=91c8a957-6ed7-4a68-a0af-720bcc1ae0c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.126076619Z" level=info msg="Started container" PID=1734 containerID=a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09 description=kube-system/storage-provisioner/storage-provisioner id=91c8a957-6ed7-4a68-a0af-720bcc1ae0c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7838aa7b61c7723f6ef4e7106bd211f5c7abedc00f563fe8e55a5cf2323df99b
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.973226209Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=df59ef88-3f5c-451f-98ce-e14fe68b5ac6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.974352425Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=751450b0-8c2e-4e24-9da6-4b1d62c23f41 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.975372124Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=e3879e8a-5057-484f-a30f-658862d97428 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.975558293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.982037666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.98252739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.021917811Z" level=info msg="Created container 9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=e3879e8a-5057-484f-a30f-658862d97428 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.022648891Z" level=info msg="Starting container: 9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7" id=bd3327f7-79e5-4ef4-8aac-aa21c9f7faf3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.024414167Z" level=info msg="Started container" PID=1771 containerID=9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper id=bd3327f7-79e5-4ef4-8aac-aa21c9f7faf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=73ed87c5251d9adbd2558562cb7cbfe8be871ee4d281d377b1086ae27cde8b4e
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.105264168Z" level=info msg="Removing container: 1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91" id=9b824da6-5da8-47f5-83c1-dd48dd1489b1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.114588002Z" level=info msg="Removed container 1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=9b824da6-5da8-47f5-83c1-dd48dd1489b1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9da1621ca096e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   73ed87c5251d9       dashboard-metrics-scraper-5f989dc9cf-kplsv       kubernetes-dashboard
	a29f139a81090       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   7838aa7b61c77       storage-provisioner                              kube-system
	1b0d120a2950f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   ffea7eb875bda       kubernetes-dashboard-8694d4445c-5wmkv            kubernetes-dashboard
	f3b03ece12827       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   d1a113afc114c       busybox                                          default
	ac30ed88cef6c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   b413c97243b20       coredns-5dd5756b68-j8hqh                         kube-system
	831cf76b7844e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   7838aa7b61c77       storage-provisioner                              kube-system
	b2da0d5358c4a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   52e13ca3e88cf       kindnet-w52sl                                    kube-system
	766f51f768df6       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   f5eb2ad4ed13d       kube-proxy-ndp6f                                 kube-system
	8e7e9bd77bc1f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   0f54db842218c       kube-apiserver-old-k8s-version-335655            kube-system
	4f36f52df9e18       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   24999571be02a       kube-controller-manager-old-k8s-version-335655   kube-system
	bf66c3337cc33       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   b9922c3fc040e       etcd-old-k8s-version-335655                      kube-system
	b1fb5f089cc60       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   445618229cebb       kube-scheduler-old-k8s-version-335655            kube-system
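The table above is the container listing gathered from the old-k8s-version-335655 node's runtime; the same view can be reproduced on the node with crictl, which minikube points at the CRI-O socket via /etc/crictl.yaml during start (see the equivalent step earlier in this log):

  # list all containers, running and exited, as in the table above
  sudo crictl ps -a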
	
	
	==> coredns [ac30ed88cef6c844e51a8a22ea8e22de89811d932dce6e1c6e5cc0b93c9e14b2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58980 - 16494 "HINFO IN 7796113168830110105.6640341912193825824. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065210085s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-335655
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-335655
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=old-k8s-version-335655
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_58_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:58:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-335655
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:00:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:59:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-335655
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                4f251d42-f2ea-4cb6-8ff2-c94beae7a0fe
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-j8hqh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-335655                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-w52sl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-335655             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-335655    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-ndp6f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-335655             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-kplsv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5wmkv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-335655 event: Registered Node old-k8s-version-335655 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-335655 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)    kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)    kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)    kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-335655 event: Registered Node old-k8s-version-335655 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [bf66c3337cc33c38b50cd84c0408339ca358893b510f2a8a1222686d78ed613c] <==
	{"level":"info","ts":"2025-11-15T09:59:43.744848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T09:59:43.744857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-15T09:59:43.744864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T09:59:43.745832Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-335655 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T09:59:43.745845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:59:43.745876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:59:43.746096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T09:59:43.746118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T09:59:43.746982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T09:59:43.747037Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T10:00:19.675872Z","caller":"traceutil/trace.go:171","msg":"trace[1295718749] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"224.228382ms","start":"2025-11-15T10:00:19.451615Z","end":"2025-11-15T10:00:19.675844Z","steps":["trace[1295718749] 'process raft request'  (duration: 223.951982ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.012368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.075165ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597074986390086 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" mod_revision:569 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" value_size:1259 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:00:20.012797Z","caller":"traceutil/trace.go:171","msg":"trace[1463470652] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:676; }","duration":"225.904734ms","start":"2025-11-15T10:00:19.786874Z","end":"2025-11-15T10:00:20.012779Z","steps":["trace[1463470652] 'read index received'  (duration: 86.837318ms)","trace[1463470652] 'applied index is now lower than readState.Index'  (duration: 139.066499ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.01297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.105223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-j8hqh\" ","response":"range_response_count:1 size:4813"}
	{"level":"info","ts":"2025-11-15T10:00:20.013105Z","caller":"traceutil/trace.go:171","msg":"trace[388665250] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-j8hqh; range_end:; response_count:1; response_revision:646; }","duration":"226.216094ms","start":"2025-11-15T10:00:19.786847Z","end":"2025-11-15T10:00:20.013063Z","steps":["trace[388665250] 'agreement among raft nodes before linearized reading'  (duration: 226.014069ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:00:20.013162Z","caller":"traceutil/trace.go:171","msg":"trace[1244868854] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"331.293009ms","start":"2025-11-15T10:00:19.681849Z","end":"2025-11-15T10:00:20.013142Z","steps":["trace[1244868854] 'process raft request'  (duration: 330.816659ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.013273Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-15T10:00:19.681838Z","time spent":"331.382244ms","remote":"127.0.0.1:55810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" mod_revision:570 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" > >"}
	{"level":"info","ts":"2025-11-15T10:00:20.013637Z","caller":"traceutil/trace.go:171","msg":"trace[820207938] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"333.412451ms","start":"2025-11-15T10:00:19.680209Z","end":"2025-11-15T10:00:20.013621Z","steps":["trace[820207938] 'process raft request'  (duration: 193.529301ms)","trace[820207938] 'compare'  (duration: 137.900129ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.013753Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-15T10:00:19.680194Z","time spent":"333.513607ms","remote":"127.0.0.1:55572","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1318,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" mod_revision:569 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" value_size:1259 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" > >"}
	{"level":"info","ts":"2025-11-15T10:00:20.224633Z","caller":"traceutil/trace.go:171","msg":"trace[1795514066] linearizableReadLoop","detail":"{readStateIndex:679; appliedIndex:678; }","duration":"202.142719ms","start":"2025-11-15T10:00:20.022466Z","end":"2025-11-15T10:00:20.224609Z","steps":["trace[1795514066] 'read index received'  (duration: 139.95997ms)","trace[1795514066] 'applied index is now lower than readState.Index'  (duration: 62.181699ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:00:20.224694Z","caller":"traceutil/trace.go:171","msg":"trace[654756911] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"203.621608ms","start":"2025-11-15T10:00:20.02105Z","end":"2025-11-15T10:00:20.224671Z","steps":["trace[654756911] 'process raft request'  (duration: 141.433387ms)","trace[654756911] 'compare'  (duration: 62.024449ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.224806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.33661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-old-k8s-version-335655\" ","response":"range_response_count:1 size:5233"}
	{"level":"info","ts":"2025-11-15T10:00:20.224851Z","caller":"traceutil/trace.go:171","msg":"trace[1799056988] range","detail":"{range_begin:/registry/pods/kube-system/etcd-old-k8s-version-335655; range_end:; response_count:1; response_revision:647; }","duration":"202.407849ms","start":"2025-11-15T10:00:20.022434Z","end":"2025-11-15T10:00:20.224842Z","steps":["trace[1799056988] 'agreement among raft nodes before linearized reading'  (duration: 202.294219ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.583665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.577827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-ndp6f\" ","response":"range_response_count:1 size:4429"}
	{"level":"info","ts":"2025-11-15T10:00:20.583726Z","caller":"traceutil/trace.go:171","msg":"trace[275117519] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-ndp6f; range_end:; response_count:1; response_revision:647; }","duration":"166.656886ms","start":"2025-11-15T10:00:20.417056Z","end":"2025-11-15T10:00:20.583713Z","steps":["trace[275117519] 'range keys from in-memory index tree'  (duration: 166.46298ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:00:36 up  1:42,  0 user,  load average: 2.56, 2.42, 1.71
	Linux old-k8s-version-335655 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b2da0d5358c4a17df789e6829fc9570ec901a6648cf6554feac6498f10accaa1] <==
	I1115 09:59:45.583714       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:59:45.583950       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 09:59:45.584144       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:59:45.584167       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:59:45.584197       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:59:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:59:45.788132       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:59:45.788182       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:59:45.788199       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:59:45.788346       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:59:46.082347       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:59:46.082371       1 metrics.go:72] Registering metrics
	I1115 09:59:46.082477       1 controller.go:711] "Syncing nftables rules"
	I1115 09:59:55.788017       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 09:59:55.788090       1 main.go:301] handling current node
	I1115 10:00:05.788686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:05.788732       1 main.go:301] handling current node
	I1115 10:00:15.788350       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:15.788410       1 main.go:301] handling current node
	I1115 10:00:25.789597       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:25.789639       1 main.go:301] handling current node
	I1115 10:00:35.792726       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:35.792770       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e7e9bd77bc1f1f89b796930001c5a1902359d0cf7e181bc548e5bc2a4ee0988] <==
	I1115 09:59:44.648592       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1115 09:59:44.703528       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:59:44.746851       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1115 09:59:44.747281       1 shared_informer.go:318] Caches are synced for configmaps
	I1115 09:59:44.747555       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 09:59:44.747573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 09:59:44.747769       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 09:59:44.747783       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1115 09:59:44.748288       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1115 09:59:44.748343       1 aggregator.go:166] initial CRD sync complete...
	I1115 09:59:44.748353       1 autoregister_controller.go:141] Starting autoregister controller
	I1115 09:59:44.748359       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 09:59:44.748366       1 cache.go:39] Caches are synced for autoregister controller
	I1115 09:59:44.751117       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 09:59:45.599794       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 09:59:45.633196       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 09:59:45.651159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:59:45.655346       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:59:45.666798       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:59:45.675259       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 09:59:45.717744       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.97.116"}
	I1115 09:59:45.732265       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.97.187"}
	I1115 09:59:57.159282       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:59:57.296961       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 09:59:57.353147       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4f36f52df9e1823d0f8b7fcb1bd85954b910702e8d94abe040010ef7749c5652] <==
	I1115 09:59:57.361178       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1115 09:59:57.361759       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 09:59:57.365015       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1115 09:59:57.375286       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-kplsv"
	I1115 09:59:57.378632       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-5wmkv"
	I1115 09:59:57.414733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.960817ms"
	I1115 09:59:57.420497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.788231ms"
	I1115 09:59:57.443838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.204781ms"
	I1115 09:59:57.445609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.817µs"
	I1115 09:59:57.450332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.861µs"
	I1115 09:59:57.465956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.092965ms"
	I1115 09:59:57.494893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.899194ms"
	I1115 09:59:57.495016       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="81.619µs"
	I1115 09:59:57.697896       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 09:59:57.766503       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 09:59:57.766546       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:00:03.068195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.428304ms"
	I1115 10:00:03.069158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.573µs"
	I1115 10:00:06.069352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.172µs"
	I1115 10:00:07.099350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.531µs"
	I1115 10:00:08.156577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.422µs"
	I1115 10:00:20.014912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="337.546594ms"
	I1115 10:00:20.015719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.848µs"
	I1115 10:00:23.115569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.681µs"
	I1115 10:00:27.721684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.063µs"
	
	
	==> kube-proxy [766f51f768df62ae9a4d892911a3e4b3efb88576a90fdfbb7eadf4ae1879169c] <==
	I1115 09:59:45.407809       1 server_others.go:69] "Using iptables proxy"
	I1115 09:59:45.416754       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 09:59:45.435241       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:59:45.437772       1 server_others.go:152] "Using iptables Proxier"
	I1115 09:59:45.437804       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 09:59:45.437810       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 09:59:45.437840       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 09:59:45.438066       1 server.go:846] "Version info" version="v1.28.0"
	I1115 09:59:45.438078       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:59:45.438687       1 config.go:188] "Starting service config controller"
	I1115 09:59:45.438759       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 09:59:45.438878       1 config.go:97] "Starting endpoint slice config controller"
	I1115 09:59:45.438905       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 09:59:45.438928       1 config.go:315] "Starting node config controller"
	I1115 09:59:45.438944       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 09:59:45.539640       1 shared_informer.go:318] Caches are synced for service config
	I1115 09:59:45.539684       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 09:59:45.539718       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b1fb5f089cc60d72969c503d4ac81cc9dad2cb2197b8fdf047b094dd5609c21c] <==
	I1115 09:59:42.962100       1 serving.go:348] Generated self-signed cert in-memory
	W1115 09:59:44.689109       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 09:59:44.689154       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 09:59:44.689166       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 09:59:44.689176       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 09:59:44.707915       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 09:59:44.707943       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:59:44.709438       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:59:44.709474       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 09:59:44.710450       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 09:59:44.710518       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 09:59:44.810292       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.413212     730 topology_manager.go:215] "Topology Admit Handler" podUID="de87fac4-aa42-4aaf-bb60-25d5a7066747" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-5wmkv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476731     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5z64\" (UniqueName: \"kubernetes.io/projected/e64f38db-81ec-4f14-8452-b6a897366430-kube-api-access-p5z64\") pod \"dashboard-metrics-scraper-5f989dc9cf-kplsv\" (UID: \"e64f38db-81ec-4f14-8452-b6a897366430\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476916     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/de87fac4-aa42-4aaf-bb60-25d5a7066747-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5wmkv\" (UID: \"de87fac4-aa42-4aaf-bb60-25d5a7066747\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5wmkv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476955     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjlh6\" (UniqueName: \"kubernetes.io/projected/de87fac4-aa42-4aaf-bb60-25d5a7066747-kube-api-access-kjlh6\") pod \"kubernetes-dashboard-8694d4445c-5wmkv\" (UID: \"de87fac4-aa42-4aaf-bb60-25d5a7066747\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5wmkv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476999     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e64f38db-81ec-4f14-8452-b6a897366430-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-kplsv\" (UID: \"e64f38db-81ec-4f14-8452-b6a897366430\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv"
	Nov 15 10:00:03 old-k8s-version-335655 kubelet[730]: I1115 10:00:03.057082     730 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5wmkv" podStartSLOduration=1.364265208 podCreationTimestamp="2025-11-15 09:59:57 +0000 UTC" firstStartedPulling="2025-11-15 09:59:57.741547408 +0000 UTC m=+15.861907025" lastFinishedPulling="2025-11-15 10:00:02.434288967 +0000 UTC m=+20.554648565" observedRunningTime="2025-11-15 10:00:03.056536287 +0000 UTC m=+21.176895892" watchObservedRunningTime="2025-11-15 10:00:03.057006748 +0000 UTC m=+21.177366352"
	Nov 15 10:00:06 old-k8s-version-335655 kubelet[730]: I1115 10:00:06.053669     730 scope.go:117] "RemoveContainer" containerID="969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b"
	Nov 15 10:00:07 old-k8s-version-335655 kubelet[730]: I1115 10:00:07.057175     730 scope.go:117] "RemoveContainer" containerID="969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b"
	Nov 15 10:00:07 old-k8s-version-335655 kubelet[730]: I1115 10:00:07.057346     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:07 old-k8s-version-335655 kubelet[730]: E1115 10:00:07.057792     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:08 old-k8s-version-335655 kubelet[730]: I1115 10:00:08.062207     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:08 old-k8s-version-335655 kubelet[730]: E1115 10:00:08.062611     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:09 old-k8s-version-335655 kubelet[730]: I1115 10:00:09.064698     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:09 old-k8s-version-335655 kubelet[730]: E1115 10:00:09.065326     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:16 old-k8s-version-335655 kubelet[730]: I1115 10:00:16.082384     730 scope.go:117] "RemoveContainer" containerID="831cf76b7844ee6e290663629081fd160f5eee162570c153fa316a7695614da3"
	Nov 15 10:00:22 old-k8s-version-335655 kubelet[730]: I1115 10:00:22.972543     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:23 old-k8s-version-335655 kubelet[730]: I1115 10:00:23.103968     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:23 old-k8s-version-335655 kubelet[730]: I1115 10:00:23.104240     730 scope.go:117] "RemoveContainer" containerID="9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7"
	Nov 15 10:00:23 old-k8s-version-335655 kubelet[730]: E1115 10:00:23.104655     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:27 old-k8s-version-335655 kubelet[730]: I1115 10:00:27.712364     730 scope.go:117] "RemoveContainer" containerID="9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7"
	Nov 15 10:00:27 old-k8s-version-335655 kubelet[730]: E1115 10:00:27.712774     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: kubelet.service: Consumed 1.552s CPU time.
	
	
	==> kubernetes-dashboard [1b0d120a2950f97fb086bc8728a2fc50b1cc4017835ef14197769a9e88ee301b] <==
	2025/11/15 10:00:02 Using namespace: kubernetes-dashboard
	2025/11/15 10:00:02 Using in-cluster config to connect to apiserver
	2025/11/15 10:00:02 Using secret token for csrf signing
	2025/11/15 10:00:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:00:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:00:02 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 10:00:02 Generating JWE encryption key
	2025/11/15 10:00:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:00:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:00:02 Initializing JWE encryption key from synchronized object
	2025/11/15 10:00:02 Creating in-cluster Sidecar client
	2025/11/15 10:00:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:02 Serving insecurely on HTTP port: 9090
	2025/11/15 10:00:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:02 Starting overwatch
	
	
	==> storage-provisioner [831cf76b7844ee6e290663629081fd160f5eee162570c153fa316a7695614da3] <==
	I1115 09:59:45.369700       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:00:15.371932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09] <==
	I1115 10:00:16.138194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:00:16.147943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:00:16.148062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:00:33.620469       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:00:33.620598       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ccfd94-ee2b-478f-91d9-d71b353df891", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-335655_df0d473d-21f1-4464-bb0c-48b6ee93ad04 became leader
	I1115 10:00:33.620662       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-335655_df0d473d-21f1-4464-bb0c-48b6ee93ad04!
	I1115 10:00:33.721198       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-335655_df0d473d-21f1-4464-bb0c-48b6ee93ad04!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335655 -n old-k8s-version-335655
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335655 -n old-k8s-version-335655: exit status 2 (343.466047ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-335655 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-335655
helpers_test.go:243: (dbg) docker inspect old-k8s-version-335655:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482",
	        "Created": "2025-11-15T09:58:23.178019961Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 600217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:59:35.475920305Z",
	            "FinishedAt": "2025-11-15T09:59:34.549290842Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/hostname",
	        "HostsPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/hosts",
	        "LogPath": "/var/lib/docker/containers/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482/e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482-json.log",
	        "Name": "/old-k8s-version-335655",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-335655:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-335655",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e7381b09c1c22ece77e9dc728eea16f79ad8d4299f370c9bd73784f75c376482",
	                "LowerDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/511bf1a954888ba81e4e64e727b739994a85683cfd70df622078393659c03bfa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-335655",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-335655/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-335655",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-335655",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-335655",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4f9181f9e2b158e2da877f76455f6df83441c94c111397d330ec15102306170d",
	            "SandboxKey": "/var/run/docker/netns/4f9181f9e2b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-335655": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5f22abf6c460e469b71da8d9c04b0cc70f79b863fc7fb95c973cc15281dd62ec",
	                    "EndpointID": "cace58f03fa557d78b4d456d1eefffa03600add9ace64e163e442f5df227031d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:9d:47:c8:31:9c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-335655",
	                        "e7381b09c1c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655: exit status 2 (384.11485ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335655 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-335655 logs -n 25: (1.149506051s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-896620                                                                                                                                                                                                                  │ force-systemd-flag-896620 │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p cert-options-759344 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ stop    │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p NoKubernetes-941483 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p NoKubernetes-941483 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │                     │
	│ delete  │ -p NoKubernetes-941483                                                                                                                                                                                                                        │ NoKubernetes-941483       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ ssh     │ cert-options-759344 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p cert-options-759344 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p cert-options-759344                                                                                                                                                                                                                        │ cert-options-759344       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p old-k8s-version-335655 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p no-preload-559401 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ addons  │ enable dashboard -p no-preload-559401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401         │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-405833 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-405833 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p kubernetes-upgrade-405833                                                                                                                                                                                                                  │ kubernetes-upgrade-405833 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513        │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ image   │ old-k8s-version-335655 image list --format=json                                                                                                                                                                                               │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p old-k8s-version-335655 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-335655    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:00:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
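Worked example of the format above: the first entry below, "I1115 10:00:15.708610  608059 out.go:360]", reads as Info severity, November 15, 10:00:15.708610, id 608059 (the minikube process writing this log), followed by the source location out.go line 360.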
	I1115 10:00:15.708610  608059 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:00:15.708779  608059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:15.708791  608059 out.go:374] Setting ErrFile to fd 2...
	I1115 10:00:15.708798  608059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:15.709098  608059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:00:15.709669  608059 out.go:368] Setting JSON to false
	I1115 10:00:15.710855  608059 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6157,"bootTime":1763194659,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:00:15.710912  608059 start.go:143] virtualization: kvm guest
	I1115 10:00:15.712856  608059 out.go:179] * [embed-certs-430513] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:00:15.714277  608059 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:00:15.714349  608059 notify.go:221] Checking for updates...
	I1115 10:00:15.717017  608059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:00:15.718364  608059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:00:15.719714  608059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:00:15.723569  608059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:00:15.724770  608059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:00:15.726349  608059 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:15.726491  608059 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:15.726613  608059 config.go:182] Loaded profile config "old-k8s-version-335655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1115 10:00:15.726735  608059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:00:15.753551  608059 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:00:15.753732  608059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:15.813899  608059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:15.803755416 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:15.814001  608059 docker.go:319] overlay module found
	I1115 10:00:15.815838  608059 out.go:179] * Using the docker driver based on user configuration
	I1115 10:00:15.817196  608059 start.go:309] selected driver: docker
	I1115 10:00:15.817213  608059 start.go:930] validating driver "docker" against <nil>
	I1115 10:00:15.817228  608059 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:00:15.818017  608059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:15.880976  608059 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:15.870477014 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:15.881150  608059 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:00:15.881366  608059 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:00:15.883094  608059 out.go:179] * Using Docker driver with root privileges
	I1115 10:00:15.884301  608059 cni.go:84] Creating CNI manager for ""
	I1115 10:00:15.884373  608059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:00:15.884388  608059 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:00:15.884469  608059 start.go:353] cluster config:
	{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:00:15.885777  608059 out.go:179] * Starting "embed-certs-430513" primary control-plane node in "embed-certs-430513" cluster
	I1115 10:00:15.886890  608059 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:00:15.888287  608059 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:00:15.889743  608059 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:15.889787  608059 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:00:15.889823  608059 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:00:15.889841  608059 cache.go:65] Caching tarball of preloaded images
	I1115 10:00:15.889925  608059 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:00:15.889938  608059 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:00:15.890031  608059 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json ...
	I1115 10:00:15.890049  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json: {Name:mk89b01ce76928fbbfb611abf2c1b13ff91226bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:15.911596  608059 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:00:15.911628  608059 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:00:15.911649  608059 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:00:15.911689  608059 start.go:360] acquireMachinesLock for embed-certs-430513: {Name:mk23e9dcdc23745b328473e6d9e82c519bc86048 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:00:15.911808  608059 start.go:364] duration metric: took 95.323µs to acquireMachinesLock for "embed-certs-430513"
	I1115 10:00:15.911843  608059 start.go:93] Provisioning new machine with config: &{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:00:15.911951  608059 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:00:16.686512  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:18.687172  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:15.791584  599971 pod_ready.go:104] pod "coredns-5dd5756b68-j8hqh" is not "Ready", error: <nil>
	W1115 10:00:18.291021  599971 pod_ready.go:104] pod "coredns-5dd5756b68-j8hqh" is not "Ready", error: <nil>
	I1115 10:00:20.017697  599971 pod_ready.go:94] pod "coredns-5dd5756b68-j8hqh" is "Ready"
	I1115 10:00:20.017729  599971 pod_ready.go:86] duration metric: took 33.732797509s for pod "coredns-5dd5756b68-j8hqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.021145  599971 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.228598  599971 pod_ready.go:94] pod "etcd-old-k8s-version-335655" is "Ready"
	I1115 10:00:20.228629  599971 pod_ready.go:86] duration metric: took 207.456823ms for pod "etcd-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.231416  599971 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.236192  599971 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-335655" is "Ready"
	I1115 10:00:20.236214  599971 pod_ready.go:86] duration metric: took 4.77503ms for pod "kube-apiserver-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:15.914077  608059 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:00:15.914357  608059 start.go:159] libmachine.API.Create for "embed-certs-430513" (driver="docker")
	I1115 10:00:15.914427  608059 client.go:173] LocalClient.Create starting
	I1115 10:00:15.914527  608059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:00:15.914571  608059 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:15.914596  608059 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:15.914687  608059 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:00:15.914720  608059 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:15.914737  608059 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:15.915235  608059 cli_runner.go:164] Run: docker network inspect embed-certs-430513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:00:15.933734  608059 cli_runner.go:211] docker network inspect embed-certs-430513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:00:15.933816  608059 network_create.go:284] running [docker network inspect embed-certs-430513] to gather additional debugging logs...
	I1115 10:00:15.933836  608059 cli_runner.go:164] Run: docker network inspect embed-certs-430513
	W1115 10:00:15.951772  608059 cli_runner.go:211] docker network inspect embed-certs-430513 returned with exit code 1
	I1115 10:00:15.951803  608059 network_create.go:287] error running [docker network inspect embed-certs-430513]: docker network inspect embed-certs-430513: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-430513 not found
	I1115 10:00:15.951820  608059 network_create.go:289] output of [docker network inspect embed-certs-430513]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-430513 not found
	
	** /stderr **
	I1115 10:00:15.951950  608059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:00:15.970811  608059 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:00:15.971720  608059 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:00:15.972269  608059 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:00:15.973282  608059 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001daad40}
	I1115 10:00:15.973308  608059 network_create.go:124] attempt to create docker network embed-certs-430513 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:00:15.973370  608059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-430513 embed-certs-430513
	I1115 10:00:16.025651  608059 network_create.go:108] docker network embed-certs-430513 192.168.76.0/24 created
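The network_create lines above show minikube skipping the occupied 192.168.49/58/67 subnets and creating the embed-certs-430513 bridge network on 192.168.76.0/24. A minimal verification sketch, assuming that network still exists on the Jenkins host (docker network inspect and its Go-template --format flag are standard docker CLI, as used elsewhere in this log):

    # confirm the subnet and gateway chosen for the cluster network
    docker network inspect embed-certs-430513 \
      --format '{{range .IPAM.Config}}{{.Subnet}} (gateway {{.Gateway}}){{end}}'
    # expected for this run: 192.168.76.0/24 (gateway 192.168.76.1)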
	I1115 10:00:16.025700  608059 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-430513" container
	I1115 10:00:16.025774  608059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:00:16.045102  608059 cli_runner.go:164] Run: docker volume create embed-certs-430513 --label name.minikube.sigs.k8s.io=embed-certs-430513 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:00:16.064668  608059 oci.go:103] Successfully created a docker volume embed-certs-430513
	I1115 10:00:16.064767  608059 cli_runner.go:164] Run: docker run --rm --name embed-certs-430513-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-430513 --entrypoint /usr/bin/test -v embed-certs-430513:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:00:16.494238  608059 oci.go:107] Successfully prepared a docker volume embed-certs-430513
	I1115 10:00:16.494315  608059 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:16.494328  608059 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:00:16.494410  608059 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-430513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:00:20.238943  599971 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.243717  599971 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-335655" is "Ready"
	I1115 10:00:20.243742  599971 pod_ready.go:86] duration metric: took 4.776276ms for pod "kube-controller-manager-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.246138  599971 pod_ready.go:83] waiting for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.618258  599971 pod_ready.go:94] pod "kube-proxy-ndp6f" is "Ready"
	I1115 10:00:20.618286  599971 pod_ready.go:86] duration metric: took 372.130031ms for pod "kube-proxy-ndp6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:20.818277  599971 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:21.217877  599971 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-335655" is "Ready"
	I1115 10:00:21.217910  599971 pod_ready.go:86] duration metric: took 399.603947ms for pod "kube-scheduler-old-k8s-version-335655" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:21.217923  599971 pod_ready.go:40] duration metric: took 34.937944844s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:00:21.267153  599971 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1115 10:00:21.269123  599971 out.go:203] 
	W1115 10:00:21.270236  599971 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1115 10:00:21.271264  599971 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1115 10:00:21.272335  599971 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-335655" cluster and "default" namespace by default
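The skew warning above (kubectl 1.34.2 against Kubernetes 1.28.0, a 6-minor-version gap) can be sidestepped with the command the log itself suggests; a sketch using the profile name from this run:

    # run a kubectl matching the cluster's 1.28.x version for this profile
    out/minikube-linux-amd64 -p old-k8s-version-335655 kubectl -- get pods -A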
	W1115 10:00:21.186510  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:23.685284  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	I1115 10:00:20.937303  608059 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-430513:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.442842168s)
	I1115 10:00:20.937342  608059 kic.go:203] duration metric: took 4.443007965s to extract preloaded images to volume ...
	W1115 10:00:20.937489  608059 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 10:00:20.937548  608059 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 10:00:20.937596  608059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:00:20.996296  608059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-430513 --name embed-certs-430513 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-430513 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-430513 --network embed-certs-430513 --ip 192.168.76.2 --volume embed-certs-430513:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:00:21.316008  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Running}}
	I1115 10:00:21.338089  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:00:21.358812  608059 cli_runner.go:164] Run: docker exec embed-certs-430513 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:00:21.413243  608059 oci.go:144] the created container "embed-certs-430513" has a running status.
	I1115 10:00:21.413320  608059 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa...
	I1115 10:00:22.057988  608059 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:00:22.084243  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:00:22.103850  608059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:00:22.103871  608059 kic_runner.go:114] Args: [docker exec --privileged embed-certs-430513 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:00:22.155819  608059 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:00:22.174488  608059 machine.go:94] provisionDockerMachine start ...
	I1115 10:00:22.174623  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:22.193449  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:22.193742  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:22.193760  608059 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:00:22.323704  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-430513
	
	I1115 10:00:22.323737  608059 ubuntu.go:182] provisioning hostname "embed-certs-430513"
	I1115 10:00:22.323807  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:22.342280  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:22.342553  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:22.342571  608059 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-430513 && echo "embed-certs-430513" | sudo tee /etc/hostname
	I1115 10:00:22.480669  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-430513
	
	I1115 10:00:22.480750  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:22.498628  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:22.498868  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:22.498895  608059 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-430513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-430513/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-430513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:00:22.627263  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:00:22.627294  608059 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:00:22.627326  608059 ubuntu.go:190] setting up certificates
	I1115 10:00:22.627338  608059 provision.go:84] configureAuth start
	I1115 10:00:22.627424  608059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:00:22.647603  608059 provision.go:143] copyHostCerts
	I1115 10:00:22.647684  608059 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:00:22.647702  608059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:00:22.647796  608059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:00:22.647973  608059 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:00:22.647984  608059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:00:22.648029  608059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:00:22.648135  608059 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:00:22.648147  608059 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:00:22.648190  608059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:00:22.648280  608059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-430513 san=[127.0.0.1 192.168.76.2 embed-certs-430513 localhost minikube]
	I1115 10:00:23.554592  608059 provision.go:177] copyRemoteCerts
	I1115 10:00:23.554665  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:00:23.554705  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:23.573297  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:23.669362  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:00:23.690532  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1115 10:00:23.709789  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:00:23.728707  608059 provision.go:87] duration metric: took 1.101353214s to configureAuth
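configureAuth above generates a server certificate with SANs [127.0.0.1 192.168.76.2 embed-certs-430513 localhost minikube] and copies it to /etc/docker/server.pem on the node. A quick inspection sketch against the host-side copy named in the log (openssl x509 flags as in the cert-options check earlier in this report):

    # list the SANs baked into the generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'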
	I1115 10:00:23.728737  608059 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:00:23.728924  608059 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:23.729041  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:23.748048  608059 main.go:143] libmachine: Using SSH client type: native
	I1115 10:00:23.748347  608059 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I1115 10:00:23.748366  608059 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:00:23.995027  608059 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:00:23.995059  608059 machine.go:97] duration metric: took 1.820544085s to provisionDockerMachine
	I1115 10:00:23.995072  608059 client.go:176] duration metric: took 8.0806338s to LocalClient.Create
	I1115 10:00:23.995100  608059 start.go:167] duration metric: took 8.080741754s to libmachine.API.Create "embed-certs-430513"
	I1115 10:00:23.995112  608059 start.go:293] postStartSetup for "embed-certs-430513" (driver="docker")
	I1115 10:00:23.995125  608059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:00:23.995181  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:00:23.995218  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.013672  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.110197  608059 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:00:24.113800  608059 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:00:24.113839  608059 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:00:24.113853  608059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:00:24.113909  608059 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:00:24.114000  608059 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:00:24.114119  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:00:24.121885  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:00:24.142235  608059 start.go:296] duration metric: took 147.103643ms for postStartSetup
	I1115 10:00:24.142620  608059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:00:24.161798  608059 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json ...
	I1115 10:00:24.162079  608059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:00:24.162129  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.180084  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.271622  608059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:00:24.276270  608059 start.go:128] duration metric: took 8.364301752s to createHost
	I1115 10:00:24.276298  608059 start.go:83] releasing machines lock for "embed-certs-430513", held for 8.364472329s
	I1115 10:00:24.276373  608059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:00:24.294034  608059 ssh_runner.go:195] Run: cat /version.json
	I1115 10:00:24.294073  608059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:00:24.294087  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.294133  608059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:00:24.313667  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.314036  608059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:00:24.403541  608059 ssh_runner.go:195] Run: systemctl --version
	I1115 10:00:24.458025  608059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:00:24.493534  608059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:00:24.498309  608059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:00:24.498375  608059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:00:24.525747  608059 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:00:24.525776  608059 start.go:496] detecting cgroup driver to use...
	I1115 10:00:24.525811  608059 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:00:24.525861  608059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:00:24.542854  608059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:00:24.556777  608059 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:00:24.556841  608059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:00:24.574640  608059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:00:24.593466  608059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:00:24.688945  608059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:00:24.790167  608059 docker.go:234] disabling docker service ...
	I1115 10:00:24.790241  608059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:00:24.809552  608059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:00:24.823108  608059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:00:24.910944  608059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:00:24.997605  608059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:00:25.010605  608059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:00:25.025277  608059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:00:25.025332  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.036344  608059 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:00:25.036438  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.045991  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.055103  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.064486  608059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:00:25.072778  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.081502  608059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.096171  608059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:00:25.105407  608059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:00:25.113841  608059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:00:25.121252  608059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:00:25.207550  608059 ssh_runner.go:195] Run: sudo systemctl restart crio
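The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged port sysctl) before restarting CRI-O. The resulting file is not captured in this log; a hedged way to check it from the host, assuming the embed-certs-430513 node is still running:

    # show the values the sed edits should have left in 02-crio.conf
    out/minikube-linux-amd64 -p embed-certs-430513 ssh -- \
      grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (roughly): pause_image = "registry.k8s.io/pause:3.10.1",
    # cgroup_manager = "systemd", conmon_cgroup = "pod",
    # and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls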
	I1115 10:00:25.308517  608059 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:00:25.308592  608059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:00:25.312429  608059 start.go:564] Will wait 60s for crictl version
	I1115 10:00:25.312491  608059 ssh_runner.go:195] Run: which crictl
	I1115 10:00:25.316035  608059 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:00:25.339974  608059 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:00:25.340046  608059 ssh_runner.go:195] Run: crio --version
	I1115 10:00:25.368492  608059 ssh_runner.go:195] Run: crio --version
	I1115 10:00:25.398381  608059 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:00:25.399561  608059 cli_runner.go:164] Run: docker network inspect embed-certs-430513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:00:25.417461  608059 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:00:25.421561  608059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:00:25.431735  608059 kubeadm.go:884] updating cluster {Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:00:25.431862  608059 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:25.431911  608059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:00:25.463605  608059 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:00:25.463627  608059 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:00:25.463673  608059 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:00:25.488815  608059 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:00:25.488840  608059 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:00:25.488848  608059 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:00:25.488935  608059 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-430513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:00:25.489000  608059 ssh_runner.go:195] Run: crio config
	I1115 10:00:25.536667  608059 cni.go:84] Creating CNI manager for ""
	I1115 10:00:25.536690  608059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:00:25.536707  608059 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:00:25.536727  608059 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-430513 NodeName:embed-certs-430513 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:00:25.536847  608059 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-430513"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:00:25.536924  608059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:00:25.545726  608059 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:00:25.545801  608059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:00:25.554377  608059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:00:25.567263  608059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:00:25.582981  608059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:00:25.596385  608059 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:00:25.600345  608059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:00:25.610171  608059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:00:25.693555  608059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:00:25.718140  608059 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513 for IP: 192.168.76.2
	I1115 10:00:25.718166  608059 certs.go:195] generating shared ca certs ...
	I1115 10:00:25.718183  608059 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:25.718313  608059 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:00:25.718352  608059 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:00:25.718364  608059 certs.go:257] generating profile certs ...
	I1115 10:00:25.718453  608059 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.key
	I1115 10:00:25.718482  608059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.crt with IP's: []
	I1115 10:00:26.003100  608059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.crt ...
	I1115 10:00:26.003130  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.crt: {Name:mkb008b092b0f5082d52920a5c4e51fed899848a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.003341  608059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.key ...
	I1115 10:00:26.003359  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.key: {Name:mke0cc7d4d5a62cc74c4376c34c7bd81d9e66b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.003513  608059 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc
	I1115 10:00:26.003535  608059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:00:26.060146  608059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc ...
	I1115 10:00:26.060181  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc: {Name:mk98203d099698eabd8febb1d6a468744cdc7f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.060427  608059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc ...
	I1115 10:00:26.060465  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc: {Name:mk447eeb36b8cf0dcbd09f968a274822e2f6fe1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.060588  608059 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt.866022bc -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt
	I1115 10:00:26.060724  608059 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key
	I1115 10:00:26.060821  608059 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key
	I1115 10:00:26.060846  608059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt with IP's: []
	I1115 10:00:26.326074  608059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt ...
	I1115 10:00:26.326106  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt: {Name:mkaf88efb27c2b61bc2261a5617b43f435eb6639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.326312  608059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key ...
	I1115 10:00:26.326331  608059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key: {Name:mka8787fffeb76423c9f58f8f91426a77ea1cb45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:26.326579  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 10:00:26.326627  608059 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 10:00:26.326644  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:00:26.326677  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:00:26.326709  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:00:26.326741  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 10:00:26.326796  608059 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:00:26.327406  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:00:26.345995  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:00:26.363310  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:00:26.380995  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:00:26.398169  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:00:26.415294  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:00:26.432304  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:00:26.449701  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:00:26.466737  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:00:26.486838  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:00:26.505142  608059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:00:26.523118  608059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:00:26.536239  608059 ssh_runner.go:195] Run: openssl version
	I1115 10:00:26.542880  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:00:26.552341  608059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:00:26.556844  608059 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:00:26.556908  608059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:00:26.592710  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:00:26.601778  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:00:26.610370  608059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:00:26.614363  608059 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:00:26.614440  608059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:00:26.653659  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:00:26.664241  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:00:26.673375  608059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:00:26.678176  608059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:00:26.678242  608059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:00:26.715944  608059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:00:26.725194  608059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:00:26.728972  608059 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:00:26.729041  608059 kubeadm.go:401] StartCluster: {Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:00:26.729112  608059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:00:26.729178  608059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:00:26.758173  608059 cri.go:89] found id: ""
	I1115 10:00:26.758240  608059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:00:26.766619  608059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:00:26.774716  608059 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:00:26.774783  608059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:00:26.783166  608059 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:00:26.783187  608059 kubeadm.go:158] found existing configuration files:
	
	I1115 10:00:26.783230  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:00:26.790828  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:00:26.790897  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:00:26.798419  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:00:26.805865  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:00:26.805924  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:00:26.813233  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:00:26.821176  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:00:26.821240  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:00:26.828771  608059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:00:26.836267  608059 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:00:26.836320  608059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:00:26.843702  608059 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:00:26.882311  608059 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:00:26.883042  608059 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:00:26.903704  608059 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:00:26.903800  608059 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:00:26.903842  608059 kubeadm.go:319] OS: Linux
	I1115 10:00:26.903918  608059 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:00:26.904018  608059 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:00:26.904096  608059 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:00:26.904170  608059 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:00:26.904235  608059 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:00:26.904293  608059 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:00:26.904343  608059 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:00:26.904430  608059 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 10:00:26.966935  608059 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:00:26.967094  608059 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:00:26.967243  608059 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:00:26.974633  608059 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1115 10:00:25.686292  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	W1115 10:00:28.185579  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	I1115 10:00:26.976940  608059 out.go:252]   - Generating certificates and keys ...
	I1115 10:00:26.977040  608059 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:00:26.977118  608059 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:00:27.027586  608059 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:00:27.458429  608059 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:00:27.915514  608059 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:00:28.215219  608059 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:00:28.375535  608059 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:00:28.375686  608059 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-430513 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:00:28.585842  608059 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:00:28.585985  608059 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-430513 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1115 10:00:29.063025  608059 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:00:29.134290  608059 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:00:29.232729  608059 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:00:29.232870  608059 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:00:29.519986  608059 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:00:29.625584  608059 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:00:30.340903  608059 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:00:30.591665  608059 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:00:30.864108  608059 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:00:30.864730  608059 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:00:30.869146  608059 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1115 10:00:30.186256  603112 pod_ready.go:104] pod "coredns-66bc5c9577-dh55n" is not "Ready", error: <nil>
	I1115 10:00:32.185794  603112 pod_ready.go:94] pod "coredns-66bc5c9577-dh55n" is "Ready"
	I1115 10:00:32.185826  603112 pod_ready.go:86] duration metric: took 31.505696726s for pod "coredns-66bc5c9577-dh55n" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.188464  603112 pod_ready.go:83] waiting for pod "etcd-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.192317  603112 pod_ready.go:94] pod "etcd-no-preload-559401" is "Ready"
	I1115 10:00:32.192346  603112 pod_ready.go:86] duration metric: took 3.858836ms for pod "etcd-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.194907  603112 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.198783  603112 pod_ready.go:94] pod "kube-apiserver-no-preload-559401" is "Ready"
	I1115 10:00:32.198807  603112 pod_ready.go:86] duration metric: took 3.876948ms for pod "kube-apiserver-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.200807  603112 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.384127  603112 pod_ready.go:94] pod "kube-controller-manager-no-preload-559401" is "Ready"
	I1115 10:00:32.384156  603112 pod_ready.go:86] duration metric: took 183.328238ms for pod "kube-controller-manager-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.584252  603112 pod_ready.go:83] waiting for pod "kube-proxy-sbk5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:32.984061  603112 pod_ready.go:94] pod "kube-proxy-sbk5r" is "Ready"
	I1115 10:00:32.984093  603112 pod_ready.go:86] duration metric: took 399.809892ms for pod "kube-proxy-sbk5r" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:33.184554  603112 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:33.586136  603112 pod_ready.go:94] pod "kube-scheduler-no-preload-559401" is "Ready"
	I1115 10:00:33.586168  603112 pod_ready.go:86] duration metric: took 401.589365ms for pod "kube-scheduler-no-preload-559401" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:00:33.586182  603112 pod_ready.go:40] duration metric: took 32.911878851s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:00:33.643015  603112 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:00:33.644942  603112 out.go:179] * Done! kubectl is now configured to use "no-preload-559401" cluster and "default" namespace by default
	I1115 10:00:30.870570  608059 out.go:252]   - Booting up control plane ...
	I1115 10:00:30.870697  608059 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:00:30.870815  608059 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:00:30.871457  608059 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:00:30.885832  608059 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:00:30.886028  608059 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:00:30.894694  608059 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:00:30.894943  608059 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:00:30.894992  608059 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:00:31.003478  608059 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:00:31.003644  608059 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:00:31.505240  608059 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.954166ms
	I1115 10:00:31.508211  608059 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:00:31.508350  608059 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1115 10:00:31.508473  608059 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:00:31.508542  608059 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:00:32.787513  608059 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.279104778s
	I1115 10:00:33.617304  608059 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.108868122s
	I1115 10:00:35.509674  608059 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001320336s
	I1115 10:00:35.521611  608059 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:00:35.533102  608059 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:00:35.541839  608059 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:00:35.542150  608059 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-430513 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:00:35.552776  608059 kubeadm.go:319] [bootstrap-token] Using token: pglc98.7ltrjwdqt15vefru
	I1115 10:00:35.553972  608059 out.go:252]   - Configuring RBAC rules ...
	I1115 10:00:35.554139  608059 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:00:35.557754  608059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:00:35.564570  608059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:00:35.567042  608059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:00:35.569378  608059 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:00:35.571735  608059 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:00:35.916717  608059 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:00:36.332499  608059 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:00:36.917188  608059 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:00:36.918625  608059 kubeadm.go:319] 
	I1115 10:00:36.918719  608059 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:00:36.918753  608059 kubeadm.go:319] 
	I1115 10:00:36.918865  608059 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:00:36.918875  608059 kubeadm.go:319] 
	I1115 10:00:36.918907  608059 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:00:36.918997  608059 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:00:36.919080  608059 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:00:36.919112  608059 kubeadm.go:319] 
	I1115 10:00:36.919197  608059 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:00:36.919208  608059 kubeadm.go:319] 
	I1115 10:00:36.919271  608059 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:00:36.919280  608059 kubeadm.go:319] 
	I1115 10:00:36.919470  608059 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:00:36.919578  608059 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:00:36.919669  608059 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:00:36.919690  608059 kubeadm.go:319] 
	I1115 10:00:36.919843  608059 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:00:36.919946  608059 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:00:36.919953  608059 kubeadm.go:319] 
	I1115 10:00:36.920078  608059 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token pglc98.7ltrjwdqt15vefru \
	I1115 10:00:36.920251  608059 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 10:00:36.920284  608059 kubeadm.go:319] 	--control-plane 
	I1115 10:00:36.920293  608059 kubeadm.go:319] 
	I1115 10:00:36.920449  608059 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:00:36.920462  608059 kubeadm.go:319] 
	I1115 10:00:36.920577  608059 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token pglc98.7ltrjwdqt15vefru \
	I1115 10:00:36.920729  608059 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 10:00:36.924720  608059 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:00:36.924880  608059 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:00:36.924921  608059 cni.go:84] Creating CNI manager for ""
	I1115 10:00:36.924936  608059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:00:36.926503  608059 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Nov 15 10:00:06 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:06.109418729Z" level=info msg="Started container" PID=1720 containerID=1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper id=cff55503-8be1-4639-937b-1d2e9748276b name=/runtime.v1.RuntimeService/StartContainer sandboxID=73ed87c5251d9adbd2558562cb7cbfe8be871ee4d281d377b1086ae27cde8b4e
	Nov 15 10:00:07 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:07.068921459Z" level=info msg="Removing container: 969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b" id=33413fb2-ba2f-4973-a46e-9e9df3148370 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:07 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:07.154192569Z" level=info msg="Removed container 969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=33413fb2-ba2f-4973-a46e-9e9df3148370 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.082865994Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d372afb0-9eb2-4787-9990-48e92bd0d328 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.083886505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=df4eef44-1de2-4f05-8f95-6c60f3617778 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.08567302Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d702fe53-521f-4533-9335-77a36f59178e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.085833389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.092418618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.092816315Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/175e54009fa23c086f64f7a23e7914e9164cb78eecbe1be635688b722661227b/merged/etc/passwd: no such file or directory"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.092951312Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/175e54009fa23c086f64f7a23e7914e9164cb78eecbe1be635688b722661227b/merged/etc/group: no such file or directory"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.093364574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.123538112Z" level=info msg="Created container a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09: kube-system/storage-provisioner/storage-provisioner" id=d702fe53-521f-4533-9335-77a36f59178e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.124131462Z" level=info msg="Starting container: a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09" id=91c8a957-6ed7-4a68-a0af-720bcc1ae0c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:16 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:16.126076619Z" level=info msg="Started container" PID=1734 containerID=a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09 description=kube-system/storage-provisioner/storage-provisioner id=91c8a957-6ed7-4a68-a0af-720bcc1ae0c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7838aa7b61c7723f6ef4e7106bd211f5c7abedc00f563fe8e55a5cf2323df99b
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.973226209Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=df59ef88-3f5c-451f-98ce-e14fe68b5ac6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.974352425Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=751450b0-8c2e-4e24-9da6-4b1d62c23f41 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.975372124Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=e3879e8a-5057-484f-a30f-658862d97428 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.975558293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.982037666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:22 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:22.98252739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.021917811Z" level=info msg="Created container 9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=e3879e8a-5057-484f-a30f-658862d97428 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.022648891Z" level=info msg="Starting container: 9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7" id=bd3327f7-79e5-4ef4-8aac-aa21c9f7faf3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.024414167Z" level=info msg="Started container" PID=1771 containerID=9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper id=bd3327f7-79e5-4ef4-8aac-aa21c9f7faf3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=73ed87c5251d9adbd2558562cb7cbfe8be871ee4d281d377b1086ae27cde8b4e
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.105264168Z" level=info msg="Removing container: 1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91" id=9b824da6-5da8-47f5-83c1-dd48dd1489b1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:23 old-k8s-version-335655 crio[567]: time="2025-11-15T10:00:23.114588002Z" level=info msg="Removed container 1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv/dashboard-metrics-scraper" id=9b824da6-5da8-47f5-83c1-dd48dd1489b1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	9da1621ca096e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   73ed87c5251d9       dashboard-metrics-scraper-5f989dc9cf-kplsv       kubernetes-dashboard
	a29f139a81090       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   7838aa7b61c77       storage-provisioner                              kube-system
	1b0d120a2950f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   ffea7eb875bda       kubernetes-dashboard-8694d4445c-5wmkv            kubernetes-dashboard
	f3b03ece12827       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   d1a113afc114c       busybox                                          default
	ac30ed88cef6c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   b413c97243b20       coredns-5dd5756b68-j8hqh                         kube-system
	831cf76b7844e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   7838aa7b61c77       storage-provisioner                              kube-system
	b2da0d5358c4a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   52e13ca3e88cf       kindnet-w52sl                                    kube-system
	766f51f768df6       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   f5eb2ad4ed13d       kube-proxy-ndp6f                                 kube-system
	8e7e9bd77bc1f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   0f54db842218c       kube-apiserver-old-k8s-version-335655            kube-system
	4f36f52df9e18       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   24999571be02a       kube-controller-manager-old-k8s-version-335655   kube-system
	bf66c3337cc33       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   b9922c3fc040e       etcd-old-k8s-version-335655                      kube-system
	b1fb5f089cc60       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   445618229cebb       kube-scheduler-old-k8s-version-335655            kube-system
	
	
	==> coredns [ac30ed88cef6c844e51a8a22ea8e22de89811d932dce6e1c6e5cc0b93c9e14b2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58980 - 16494 "HINFO IN 7796113168830110105.6640341912193825824. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065210085s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-335655
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-335655
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=old-k8s-version-335655
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_58_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:58:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-335655
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:00:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:58:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:00:15 +0000   Sat, 15 Nov 2025 09:59:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-335655
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                4f251d42-f2ea-4cb6-8ff2-c94beae7a0fe
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-j8hqh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-335655                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-w52sl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-335655             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-335655    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-ndp6f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-335655             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-kplsv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5wmkv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-335655 event: Registered Node old-k8s-version-335655 in Controller
	  Normal  NodeReady                95s                  kubelet          Node old-k8s-version-335655 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)    kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)    kubelet          Node old-k8s-version-335655 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)    kubelet          Node old-k8s-version-335655 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-335655 event: Registered Node old-k8s-version-335655 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [bf66c3337cc33c38b50cd84c0408339ca358893b510f2a8a1222686d78ed613c] <==
	{"level":"info","ts":"2025-11-15T09:59:43.744848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T09:59:43.744857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-15T09:59:43.744864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-15T09:59:43.745832Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-335655 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-15T09:59:43.745845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:59:43.745876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-15T09:59:43.746096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-15T09:59:43.746118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-15T09:59:43.746982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-15T09:59:43.747037Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-15T10:00:19.675872Z","caller":"traceutil/trace.go:171","msg":"trace[1295718749] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"224.228382ms","start":"2025-11-15T10:00:19.451615Z","end":"2025-11-15T10:00:19.675844Z","steps":["trace[1295718749] 'process raft request'  (duration: 223.951982ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.012368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.075165ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597074986390086 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" mod_revision:569 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" value_size:1259 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:00:20.012797Z","caller":"traceutil/trace.go:171","msg":"trace[1463470652] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:676; }","duration":"225.904734ms","start":"2025-11-15T10:00:19.786874Z","end":"2025-11-15T10:00:20.012779Z","steps":["trace[1463470652] 'read index received'  (duration: 86.837318ms)","trace[1463470652] 'applied index is now lower than readState.Index'  (duration: 139.066499ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.01297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.105223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-j8hqh\" ","response":"range_response_count:1 size:4813"}
	{"level":"info","ts":"2025-11-15T10:00:20.013105Z","caller":"traceutil/trace.go:171","msg":"trace[388665250] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-j8hqh; range_end:; response_count:1; response_revision:646; }","duration":"226.216094ms","start":"2025-11-15T10:00:19.786847Z","end":"2025-11-15T10:00:20.013063Z","steps":["trace[388665250] 'agreement among raft nodes before linearized reading'  (duration: 226.014069ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:00:20.013162Z","caller":"traceutil/trace.go:171","msg":"trace[1244868854] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"331.293009ms","start":"2025-11-15T10:00:19.681849Z","end":"2025-11-15T10:00:20.013142Z","steps":["trace[1244868854] 'process raft request'  (duration: 330.816659ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.013273Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-15T10:00:19.681838Z","time spent":"331.382244ms","remote":"127.0.0.1:55810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" mod_revision:570 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" > >"}
	{"level":"info","ts":"2025-11-15T10:00:20.013637Z","caller":"traceutil/trace.go:171","msg":"trace[820207938] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"333.412451ms","start":"2025-11-15T10:00:19.680209Z","end":"2025-11-15T10:00:20.013621Z","steps":["trace[820207938] 'process raft request'  (duration: 193.529301ms)","trace[820207938] 'compare'  (duration: 137.900129ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.013753Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-15T10:00:19.680194Z","time spent":"333.513607ms","remote":"127.0.0.1:55572","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1318,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" mod_revision:569 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" value_size:1259 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-2q5z9\" > >"}
	{"level":"info","ts":"2025-11-15T10:00:20.224633Z","caller":"traceutil/trace.go:171","msg":"trace[1795514066] linearizableReadLoop","detail":"{readStateIndex:679; appliedIndex:678; }","duration":"202.142719ms","start":"2025-11-15T10:00:20.022466Z","end":"2025-11-15T10:00:20.224609Z","steps":["trace[1795514066] 'read index received'  (duration: 139.95997ms)","trace[1795514066] 'applied index is now lower than readState.Index'  (duration: 62.181699ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:00:20.224694Z","caller":"traceutil/trace.go:171","msg":"trace[654756911] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"203.621608ms","start":"2025-11-15T10:00:20.02105Z","end":"2025-11-15T10:00:20.224671Z","steps":["trace[654756911] 'process raft request'  (duration: 141.433387ms)","trace[654756911] 'compare'  (duration: 62.024449ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.224806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.33661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-old-k8s-version-335655\" ","response":"range_response_count:1 size:5233"}
	{"level":"info","ts":"2025-11-15T10:00:20.224851Z","caller":"traceutil/trace.go:171","msg":"trace[1799056988] range","detail":"{range_begin:/registry/pods/kube-system/etcd-old-k8s-version-335655; range_end:; response_count:1; response_revision:647; }","duration":"202.407849ms","start":"2025-11-15T10:00:20.022434Z","end":"2025-11-15T10:00:20.224842Z","steps":["trace[1799056988] 'agreement among raft nodes before linearized reading'  (duration: 202.294219ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.583665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.577827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-ndp6f\" ","response":"range_response_count:1 size:4429"}
	{"level":"info","ts":"2025-11-15T10:00:20.583726Z","caller":"traceutil/trace.go:171","msg":"trace[275117519] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-ndp6f; range_end:; response_count:1; response_revision:647; }","duration":"166.656886ms","start":"2025-11-15T10:00:20.417056Z","end":"2025-11-15T10:00:20.583713Z","steps":["trace[275117519] 'range keys from in-memory index tree'  (duration: 166.46298ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:00:38 up  1:42,  0 user,  load average: 2.56, 2.42, 1.71
	Linux old-k8s-version-335655 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b2da0d5358c4a17df789e6829fc9570ec901a6648cf6554feac6498f10accaa1] <==
	I1115 09:59:45.583714       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:59:45.583950       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 09:59:45.584144       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:59:45.584167       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:59:45.584197       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:59:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:59:45.788132       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:59:45.788182       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:59:45.788199       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:59:45.788346       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:59:46.082347       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:59:46.082371       1 metrics.go:72] Registering metrics
	I1115 09:59:46.082477       1 controller.go:711] "Syncing nftables rules"
	I1115 09:59:55.788017       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 09:59:55.788090       1 main.go:301] handling current node
	I1115 10:00:05.788686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:05.788732       1 main.go:301] handling current node
	I1115 10:00:15.788350       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:15.788410       1 main.go:301] handling current node
	I1115 10:00:25.789597       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:25.789639       1 main.go:301] handling current node
	I1115 10:00:35.792726       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:00:35.792770       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e7e9bd77bc1f1f89b796930001c5a1902359d0cf7e181bc548e5bc2a4ee0988] <==
	I1115 09:59:44.648592       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1115 09:59:44.703528       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:59:44.746851       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1115 09:59:44.747281       1 shared_informer.go:318] Caches are synced for configmaps
	I1115 09:59:44.747555       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1115 09:59:44.747573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 09:59:44.747769       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1115 09:59:44.747783       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1115 09:59:44.748288       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1115 09:59:44.748343       1 aggregator.go:166] initial CRD sync complete...
	I1115 09:59:44.748353       1 autoregister_controller.go:141] Starting autoregister controller
	I1115 09:59:44.748359       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 09:59:44.748366       1 cache.go:39] Caches are synced for autoregister controller
	I1115 09:59:44.751117       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1115 09:59:45.599794       1 controller.go:624] quota admission added evaluator for: namespaces
	I1115 09:59:45.633196       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1115 09:59:45.651159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:59:45.655346       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:59:45.666798       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:59:45.675259       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1115 09:59:45.717744       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.97.116"}
	I1115 09:59:45.732265       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.97.187"}
	I1115 09:59:57.159282       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:59:57.296961       1 controller.go:624] quota admission added evaluator for: endpoints
	I1115 09:59:57.353147       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4f36f52df9e1823d0f8b7fcb1bd85954b910702e8d94abe040010ef7749c5652] <==
	I1115 09:59:57.361178       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1115 09:59:57.361759       1 shared_informer.go:318] Caches are synced for resource quota
	I1115 09:59:57.365015       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1115 09:59:57.375286       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-kplsv"
	I1115 09:59:57.378632       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-5wmkv"
	I1115 09:59:57.414733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.960817ms"
	I1115 09:59:57.420497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.788231ms"
	I1115 09:59:57.443838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.204781ms"
	I1115 09:59:57.445609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.817µs"
	I1115 09:59:57.450332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.861µs"
	I1115 09:59:57.465956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.092965ms"
	I1115 09:59:57.494893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.899194ms"
	I1115 09:59:57.495016       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="81.619µs"
	I1115 09:59:57.697896       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 09:59:57.766503       1 shared_informer.go:318] Caches are synced for garbage collector
	I1115 09:59:57.766546       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1115 10:00:03.068195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.428304ms"
	I1115 10:00:03.069158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.573µs"
	I1115 10:00:06.069352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.172µs"
	I1115 10:00:07.099350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.531µs"
	I1115 10:00:08.156577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.422µs"
	I1115 10:00:20.014912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="337.546594ms"
	I1115 10:00:20.015719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.848µs"
	I1115 10:00:23.115569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.681µs"
	I1115 10:00:27.721684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.063µs"
	
	
	==> kube-proxy [766f51f768df62ae9a4d892911a3e4b3efb88576a90fdfbb7eadf4ae1879169c] <==
	I1115 09:59:45.407809       1 server_others.go:69] "Using iptables proxy"
	I1115 09:59:45.416754       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1115 09:59:45.435241       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:59:45.437772       1 server_others.go:152] "Using iptables Proxier"
	I1115 09:59:45.437804       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1115 09:59:45.437810       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1115 09:59:45.437840       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1115 09:59:45.438066       1 server.go:846] "Version info" version="v1.28.0"
	I1115 09:59:45.438078       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:59:45.438687       1 config.go:188] "Starting service config controller"
	I1115 09:59:45.438759       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1115 09:59:45.438878       1 config.go:97] "Starting endpoint slice config controller"
	I1115 09:59:45.438905       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1115 09:59:45.438928       1 config.go:315] "Starting node config controller"
	I1115 09:59:45.438944       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1115 09:59:45.539640       1 shared_informer.go:318] Caches are synced for service config
	I1115 09:59:45.539684       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1115 09:59:45.539718       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b1fb5f089cc60d72969c503d4ac81cc9dad2cb2197b8fdf047b094dd5609c21c] <==
	I1115 09:59:42.962100       1 serving.go:348] Generated self-signed cert in-memory
	W1115 09:59:44.689109       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1115 09:59:44.689154       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1115 09:59:44.689166       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1115 09:59:44.689176       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1115 09:59:44.707915       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1115 09:59:44.707943       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:59:44.709438       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:59:44.709474       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1115 09:59:44.710450       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1115 09:59:44.710518       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1115 09:59:44.810292       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.413212     730 topology_manager.go:215] "Topology Admit Handler" podUID="de87fac4-aa42-4aaf-bb60-25d5a7066747" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-5wmkv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476731     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5z64\" (UniqueName: \"kubernetes.io/projected/e64f38db-81ec-4f14-8452-b6a897366430-kube-api-access-p5z64\") pod \"dashboard-metrics-scraper-5f989dc9cf-kplsv\" (UID: \"e64f38db-81ec-4f14-8452-b6a897366430\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476916     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/de87fac4-aa42-4aaf-bb60-25d5a7066747-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5wmkv\" (UID: \"de87fac4-aa42-4aaf-bb60-25d5a7066747\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5wmkv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476955     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjlh6\" (UniqueName: \"kubernetes.io/projected/de87fac4-aa42-4aaf-bb60-25d5a7066747-kube-api-access-kjlh6\") pod \"kubernetes-dashboard-8694d4445c-5wmkv\" (UID: \"de87fac4-aa42-4aaf-bb60-25d5a7066747\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5wmkv"
	Nov 15 09:59:57 old-k8s-version-335655 kubelet[730]: I1115 09:59:57.476999     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e64f38db-81ec-4f14-8452-b6a897366430-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-kplsv\" (UID: \"e64f38db-81ec-4f14-8452-b6a897366430\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv"
	Nov 15 10:00:03 old-k8s-version-335655 kubelet[730]: I1115 10:00:03.057082     730 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5wmkv" podStartSLOduration=1.364265208 podCreationTimestamp="2025-11-15 09:59:57 +0000 UTC" firstStartedPulling="2025-11-15 09:59:57.741547408 +0000 UTC m=+15.861907025" lastFinishedPulling="2025-11-15 10:00:02.434288967 +0000 UTC m=+20.554648565" observedRunningTime="2025-11-15 10:00:03.056536287 +0000 UTC m=+21.176895892" watchObservedRunningTime="2025-11-15 10:00:03.057006748 +0000 UTC m=+21.177366352"
	Nov 15 10:00:06 old-k8s-version-335655 kubelet[730]: I1115 10:00:06.053669     730 scope.go:117] "RemoveContainer" containerID="969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b"
	Nov 15 10:00:07 old-k8s-version-335655 kubelet[730]: I1115 10:00:07.057175     730 scope.go:117] "RemoveContainer" containerID="969ea3bdbf52750bc7261b23a17f1a33a85d43a35b215980722a4853ae83085b"
	Nov 15 10:00:07 old-k8s-version-335655 kubelet[730]: I1115 10:00:07.057346     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:07 old-k8s-version-335655 kubelet[730]: E1115 10:00:07.057792     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:08 old-k8s-version-335655 kubelet[730]: I1115 10:00:08.062207     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:08 old-k8s-version-335655 kubelet[730]: E1115 10:00:08.062611     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:09 old-k8s-version-335655 kubelet[730]: I1115 10:00:09.064698     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:09 old-k8s-version-335655 kubelet[730]: E1115 10:00:09.065326     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:16 old-k8s-version-335655 kubelet[730]: I1115 10:00:16.082384     730 scope.go:117] "RemoveContainer" containerID="831cf76b7844ee6e290663629081fd160f5eee162570c153fa316a7695614da3"
	Nov 15 10:00:22 old-k8s-version-335655 kubelet[730]: I1115 10:00:22.972543     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:23 old-k8s-version-335655 kubelet[730]: I1115 10:00:23.103968     730 scope.go:117] "RemoveContainer" containerID="1f11a1b5992b240d31e1141593893ccbc5b71c43cde205fcbaf563e70b63be91"
	Nov 15 10:00:23 old-k8s-version-335655 kubelet[730]: I1115 10:00:23.104240     730 scope.go:117] "RemoveContainer" containerID="9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7"
	Nov 15 10:00:23 old-k8s-version-335655 kubelet[730]: E1115 10:00:23.104655     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:27 old-k8s-version-335655 kubelet[730]: I1115 10:00:27.712364     730 scope.go:117] "RemoveContainer" containerID="9da1621ca096e17a0f14c287b16cae84d8be8bc11dc4c323425e15c1db1d75d7"
	Nov 15 10:00:27 old-k8s-version-335655 kubelet[730]: E1115 10:00:27.712774     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kplsv_kubernetes-dashboard(e64f38db-81ec-4f14-8452-b6a897366430)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kplsv" podUID="e64f38db-81ec-4f14-8452-b6a897366430"
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:00:33 old-k8s-version-335655 systemd[1]: kubelet.service: Consumed 1.552s CPU time.
	
	
	==> kubernetes-dashboard [1b0d120a2950f97fb086bc8728a2fc50b1cc4017835ef14197769a9e88ee301b] <==
	2025/11/15 10:00:02 Using namespace: kubernetes-dashboard
	2025/11/15 10:00:02 Using in-cluster config to connect to apiserver
	2025/11/15 10:00:02 Using secret token for csrf signing
	2025/11/15 10:00:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:00:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:00:02 Successful initial request to the apiserver, version: v1.28.0
	2025/11/15 10:00:02 Generating JWE encryption key
	2025/11/15 10:00:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:00:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:00:02 Initializing JWE encryption key from synchronized object
	2025/11/15 10:00:02 Creating in-cluster Sidecar client
	2025/11/15 10:00:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:02 Serving insecurely on HTTP port: 9090
	2025/11/15 10:00:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:02 Starting overwatch
	
	
	==> storage-provisioner [831cf76b7844ee6e290663629081fd160f5eee162570c153fa316a7695614da3] <==
	I1115 09:59:45.369700       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:00:15.371932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a29f139a8109040ea93e6b686169546ab3f7572e5964c616ddfe4b0109c18e09] <==
	I1115 10:00:16.138194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:00:16.147943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:00:16.148062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1115 10:00:33.620469       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:00:33.620598       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ccfd94-ee2b-478f-91d9-d71b353df891", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-335655_df0d473d-21f1-4464-bb0c-48b6ee93ad04 became leader
	I1115 10:00:33.620662       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-335655_df0d473d-21f1-4464-bb0c-48b6ee93ad04!
	I1115 10:00:33.721198       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-335655_df0d473d-21f1-4464-bb0c-48b6ee93ad04!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335655 -n old-k8s-version-335655
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335655 -n old-k8s-version-335655: exit status 2 (332.346666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-335655 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.89s)
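Note: the APIServer status probe above prints "Running" yet exits with status 2 because the pause attempt stops the kubelet (see the "Stopping kubelet.service" lines in the journal) while the control-plane containers keep running, leaving the profile in a mixed state. A minimal re-check, assuming the old-k8s-version-335655 profile is still up; the exact commands below are a diagnostic sketch, not taken from the test itself:

    minikube status -p old-k8s-version-335655                                          # nonzero exit for a mixed/paused state
    minikube ssh -p old-k8s-version-335655 "sudo systemctl is-active kubelet"          # expected to report inactive after the pause attempt
    minikube ssh -p old-k8s-version-335655 "sudo crictl ps --name kube-apiserver"      # apiserver container still listed as running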

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-559401 --alsologtostderr -v=1
E1115 10:00:46.554581  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-559401 --alsologtostderr -v=1: exit status 80 (2.347778237s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-559401 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:00:45.444709  613841 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:00:45.445109  613841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:45.445121  613841 out.go:374] Setting ErrFile to fd 2...
	I1115 10:00:45.445125  613841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:45.445297  613841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:00:45.445560  613841 out.go:368] Setting JSON to false
	I1115 10:00:45.445619  613841 mustload.go:66] Loading cluster: no-preload-559401
	I1115 10:00:45.445953  613841 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:45.446316  613841 cli_runner.go:164] Run: docker container inspect no-preload-559401 --format={{.State.Status}}
	I1115 10:00:45.464702  613841 host.go:66] Checking if "no-preload-559401" exists ...
	I1115 10:00:45.464988  613841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:45.521553  613841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:82 SystemTime:2025-11-15 10:00:45.510611822 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:45.522191  613841 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-559401 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:00:45.681870  613841 out.go:179] * Pausing node no-preload-559401 ... 
	I1115 10:00:45.724463  613841 host.go:66] Checking if "no-preload-559401" exists ...
	I1115 10:00:45.724902  613841 ssh_runner.go:195] Run: systemctl --version
	I1115 10:00:45.724977  613841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-559401
	I1115 10:00:45.743056  613841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/no-preload-559401/id_rsa Username:docker}
	I1115 10:00:45.836594  613841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:00:45.849340  613841 pause.go:52] kubelet running: true
	I1115 10:00:45.849424  613841 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:00:46.015413  613841 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:00:46.015532  613841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:00:46.089033  613841 cri.go:89] found id: "331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38"
	I1115 10:00:46.089062  613841 cri.go:89] found id: "d844e71d2c4dcba665557eaabf99f9fdf94b2403dcfc278ac27d957559053a0a"
	I1115 10:00:46.089066  613841 cri.go:89] found id: "7d47a971f6af8ecd8aa0f07da9138293117a44b7e6908c8ae2a89bfb25fb9c01"
	I1115 10:00:46.089069  613841 cri.go:89] found id: "5c2dfc91efbcd6fc8a96bb97ab98fffb24a7769e1d692bc2a99b9906e2394220"
	I1115 10:00:46.089072  613841 cri.go:89] found id: "4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538"
	I1115 10:00:46.089074  613841 cri.go:89] found id: "0e0c907536637f4671373a2fb17787378e0cb3601c00f76492ee5288116e81c8"
	I1115 10:00:46.089077  613841 cri.go:89] found id: "8895096ed11812bd45be0812f3ddacb441137c37505fa8846ad04fb1c033843b"
	I1115 10:00:46.089083  613841 cri.go:89] found id: "6ac889c115f00328ac4c19198ba12abd9a0f7d168f55ba530681cda91918cbf8"
	I1115 10:00:46.089085  613841 cri.go:89] found id: "e1a7a97a08ef5ef64767e999edbcfdfc0ad52e1760fecfbba7b4ca857c71ea4b"
	I1115 10:00:46.089097  613841 cri.go:89] found id: "03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	I1115 10:00:46.089100  613841 cri.go:89] found id: "912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b"
	I1115 10:00:46.089102  613841 cri.go:89] found id: ""
	I1115 10:00:46.089142  613841 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:00:46.102135  613841 retry.go:31] will retry after 173.645302ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:46Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:00:46.276643  613841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:00:46.290605  613841 pause.go:52] kubelet running: false
	I1115 10:00:46.290668  613841 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:00:46.436627  613841 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:00:46.436710  613841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:00:46.505303  613841 cri.go:89] found id: "331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38"
	I1115 10:00:46.505323  613841 cri.go:89] found id: "d844e71d2c4dcba665557eaabf99f9fdf94b2403dcfc278ac27d957559053a0a"
	I1115 10:00:46.505328  613841 cri.go:89] found id: "7d47a971f6af8ecd8aa0f07da9138293117a44b7e6908c8ae2a89bfb25fb9c01"
	I1115 10:00:46.505331  613841 cri.go:89] found id: "5c2dfc91efbcd6fc8a96bb97ab98fffb24a7769e1d692bc2a99b9906e2394220"
	I1115 10:00:46.505334  613841 cri.go:89] found id: "4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538"
	I1115 10:00:46.505359  613841 cri.go:89] found id: "0e0c907536637f4671373a2fb17787378e0cb3601c00f76492ee5288116e81c8"
	I1115 10:00:46.505363  613841 cri.go:89] found id: "8895096ed11812bd45be0812f3ddacb441137c37505fa8846ad04fb1c033843b"
	I1115 10:00:46.505367  613841 cri.go:89] found id: "6ac889c115f00328ac4c19198ba12abd9a0f7d168f55ba530681cda91918cbf8"
	I1115 10:00:46.505370  613841 cri.go:89] found id: "e1a7a97a08ef5ef64767e999edbcfdfc0ad52e1760fecfbba7b4ca857c71ea4b"
	I1115 10:00:46.505377  613841 cri.go:89] found id: "03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	I1115 10:00:46.505382  613841 cri.go:89] found id: "912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b"
	I1115 10:00:46.505385  613841 cri.go:89] found id: ""
	I1115 10:00:46.505447  613841 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:00:46.517182  613841 retry.go:31] will retry after 373.496027ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:46Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:00:46.891612  613841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:00:46.905068  613841 pause.go:52] kubelet running: false
	I1115 10:00:46.905135  613841 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:00:47.048738  613841 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:00:47.048817  613841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:00:47.116651  613841 cri.go:89] found id: "331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38"
	I1115 10:00:47.116673  613841 cri.go:89] found id: "d844e71d2c4dcba665557eaabf99f9fdf94b2403dcfc278ac27d957559053a0a"
	I1115 10:00:47.116677  613841 cri.go:89] found id: "7d47a971f6af8ecd8aa0f07da9138293117a44b7e6908c8ae2a89bfb25fb9c01"
	I1115 10:00:47.116680  613841 cri.go:89] found id: "5c2dfc91efbcd6fc8a96bb97ab98fffb24a7769e1d692bc2a99b9906e2394220"
	I1115 10:00:47.116683  613841 cri.go:89] found id: "4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538"
	I1115 10:00:47.116686  613841 cri.go:89] found id: "0e0c907536637f4671373a2fb17787378e0cb3601c00f76492ee5288116e81c8"
	I1115 10:00:47.116689  613841 cri.go:89] found id: "8895096ed11812bd45be0812f3ddacb441137c37505fa8846ad04fb1c033843b"
	I1115 10:00:47.116691  613841 cri.go:89] found id: "6ac889c115f00328ac4c19198ba12abd9a0f7d168f55ba530681cda91918cbf8"
	I1115 10:00:47.116694  613841 cri.go:89] found id: "e1a7a97a08ef5ef64767e999edbcfdfc0ad52e1760fecfbba7b4ca857c71ea4b"
	I1115 10:00:47.116699  613841 cri.go:89] found id: "03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	I1115 10:00:47.116702  613841 cri.go:89] found id: "912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b"
	I1115 10:00:47.116704  613841 cri.go:89] found id: ""
	I1115 10:00:47.116741  613841 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:00:47.128978  613841 retry.go:31] will retry after 313.957574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:47Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:00:47.443541  613841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:00:47.456961  613841 pause.go:52] kubelet running: false
	I1115 10:00:47.457041  613841 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:00:47.626467  613841 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:00:47.626577  613841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:00:47.705513  613841 cri.go:89] found id: "331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38"
	I1115 10:00:47.705545  613841 cri.go:89] found id: "d844e71d2c4dcba665557eaabf99f9fdf94b2403dcfc278ac27d957559053a0a"
	I1115 10:00:47.705550  613841 cri.go:89] found id: "7d47a971f6af8ecd8aa0f07da9138293117a44b7e6908c8ae2a89bfb25fb9c01"
	I1115 10:00:47.705553  613841 cri.go:89] found id: "5c2dfc91efbcd6fc8a96bb97ab98fffb24a7769e1d692bc2a99b9906e2394220"
	I1115 10:00:47.705556  613841 cri.go:89] found id: "4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538"
	I1115 10:00:47.705560  613841 cri.go:89] found id: "0e0c907536637f4671373a2fb17787378e0cb3601c00f76492ee5288116e81c8"
	I1115 10:00:47.705563  613841 cri.go:89] found id: "8895096ed11812bd45be0812f3ddacb441137c37505fa8846ad04fb1c033843b"
	I1115 10:00:47.705565  613841 cri.go:89] found id: "6ac889c115f00328ac4c19198ba12abd9a0f7d168f55ba530681cda91918cbf8"
	I1115 10:00:47.705568  613841 cri.go:89] found id: "e1a7a97a08ef5ef64767e999edbcfdfc0ad52e1760fecfbba7b4ca857c71ea4b"
	I1115 10:00:47.705581  613841 cri.go:89] found id: "03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	I1115 10:00:47.705584  613841 cri.go:89] found id: "912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b"
	I1115 10:00:47.705586  613841 cri.go:89] found id: ""
	I1115 10:00:47.705626  613841 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:00:47.720570  613841 out.go:203] 
	W1115 10:00:47.721983  613841 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:00:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:00:47.722006  613841 out.go:285] * 
	* 
	W1115 10:00:47.726890  613841 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:00:47.728257  613841 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-559401 --alsologtostderr -v=1 failed: exit status 80
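Editor's note (not part of the captured output): the GUEST_PAUSE failure above comes from the last step of the pause path. CRI-O enumerated eleven running containers, but `sudo runc list -f json` then failed because the runc state directory /run/runc does not exist inside the node, which is consistent with /run being a fresh tmpfs (see "Tmpfs" in the docker inspect output below). A minimal reproduction sketch, using only commands already shown in this log and assuming the no-preload-559401 profile is still up:

	# confirm the runc state directory is absent inside the node
	minikube -p no-preload-559401 ssh -- 'ls -ld /run/runc || echo /run/runc missing'
	# re-run the exact command the pause path failed on
	minikube -p no-preload-559401 ssh -- 'sudo runc list -f json'
	# CRI-O's own view of the kube-system containers it enumerated above
	minikube -p no-preload-559401 ssh -- 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
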
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-559401
helpers_test.go:243: (dbg) docker inspect no-preload-559401:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e",
	        "Created": "2025-11-15T09:58:30.798243596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 603315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:59:50.38857041Z",
	            "FinishedAt": "2025-11-15T09:59:49.493298356Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/hostname",
	        "HostsPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/hosts",
	        "LogPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e-json.log",
	        "Name": "/no-preload-559401",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-559401:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-559401",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e",
	                "LowerDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-559401",
	                "Source": "/var/lib/docker/volumes/no-preload-559401/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-559401",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-559401",
	                "name.minikube.sigs.k8s.io": "no-preload-559401",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c955cf39ff6262eb66cd5c9fd33f8a1ba0045fc3bc136e92f4931aa5c42e101",
	            "SandboxKey": "/var/run/docker/netns/5c955cf39ff6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-559401": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9778bfb33840535be1dad946c45c61cf82a33a723dc88bd05e11d71cf2fc0a9f",
	                    "EndpointID": "a48023370a0cbddea144ed8bf93d19cbd309c55de0c990ed0844a839533b7528",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "5e:cf:b8:c1:17:94",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-559401",
	                        "96bf94e265be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401: exit status 2 (355.58855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
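Editor's note (not part of the captured output): docker inspect above still reports State.Status "running" with Paused false, while the partial pause had already stopped and disabled the kubelet, which is presumably why `minikube status` prints Running for the host yet exits with status 2. A sketch for cross-checking the same fields directly, with the field paths taken from the inspect output above:

	# container state as Docker sees it
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-559401
	# host port mapped to the apiserver (8443/tcp) under NetworkSettings.Ports
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-559401
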
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-559401 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-559401 logs -n 25: (1.216275702s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ ssh     │ cert-options-759344 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-759344          │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p cert-options-759344 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-759344          │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p cert-options-759344                                                                                                                                                                                                                        │ cert-options-759344          │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p old-k8s-version-335655 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p no-preload-559401 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ addons  │ enable dashboard -p no-preload-559401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p kubernetes-upgrade-405833                                                                                                                                                                                                                  │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ image   │ old-k8s-version-335655 image list --format=json                                                                                                                                                                                               │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p old-k8s-version-335655 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p disable-driver-mounts-553319                                                                                                                                                                                                               │ disable-driver-mounts-553319 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ image   │ no-preload-559401 image list --format=json                                                                                                                                                                                                    │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p no-preload-559401 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:00:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:00:42.348267  613222 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:00:42.348585  613222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:42.348597  613222 out.go:374] Setting ErrFile to fd 2...
	I1115 10:00:42.348603  613222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:42.348895  613222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:00:42.349540  613222 out.go:368] Setting JSON to false
	I1115 10:00:42.351147  613222 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6183,"bootTime":1763194659,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:00:42.351242  613222 start.go:143] virtualization: kvm guest
	I1115 10:00:42.353315  613222 out.go:179] * [default-k8s-diff-port-679865] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:00:42.354733  613222 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:00:42.354740  613222 notify.go:221] Checking for updates...
	I1115 10:00:42.357282  613222 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:00:42.358849  613222 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:00:42.360922  613222 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:00:42.362131  613222 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:00:42.364287  613222 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:00:42.368050  613222 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:42.368198  613222 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:42.368317  613222 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:42.368465  613222 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:00:42.393995  613222 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:00:42.394106  613222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:42.453363  613222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:42.442303723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:42.453518  613222 docker.go:319] overlay module found
	I1115 10:00:42.455464  613222 out.go:179] * Using the docker driver based on user configuration
	I1115 10:00:42.456699  613222 start.go:309] selected driver: docker
	I1115 10:00:42.456719  613222 start.go:930] validating driver "docker" against <nil>
	I1115 10:00:42.456734  613222 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:00:42.457333  613222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:42.514470  613222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:42.504670414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:42.514625  613222 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:00:42.514863  613222 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:00:42.516631  613222 out.go:179] * Using Docker driver with root privileges
	I1115 10:00:42.517811  613222 cni.go:84] Creating CNI manager for ""
	I1115 10:00:42.517871  613222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:00:42.517881  613222 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:00:42.517959  613222 start.go:353] cluster config:
	{Name:default-k8s-diff-port-679865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-679865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:00:42.519340  613222 out.go:179] * Starting "default-k8s-diff-port-679865" primary control-plane node in "default-k8s-diff-port-679865" cluster
	I1115 10:00:42.520518  613222 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:00:42.522088  613222 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:00:42.523305  613222 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:42.523347  613222 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:00:42.523365  613222 cache.go:65] Caching tarball of preloaded images
	I1115 10:00:42.523425  613222 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:00:42.523508  613222 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:00:42.523526  613222 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:00:42.523639  613222 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/default-k8s-diff-port-679865/config.json ...
	I1115 10:00:42.523667  613222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/default-k8s-diff-port-679865/config.json: {Name:mkd225497c5387e10afe68a2a3044f4b4cc1bc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:42.544548  613222 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:00:42.544575  613222 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:00:42.544595  613222 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:00:42.544626  613222 start.go:360] acquireMachinesLock for default-k8s-diff-port-679865: {Name:mke1c48082f838819f77221e2758b30fa6645123 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:00:42.544742  613222 start.go:364] duration metric: took 95.537µs to acquireMachinesLock for "default-k8s-diff-port-679865"
	I1115 10:00:42.544773  613222 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-679865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-679865 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:00:42.544887  613222 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:00:41.902678  608059 addons.go:515] duration metric: took 531.970039ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:00:42.180648  608059 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-430513" context rescaled to 1 replicas
	W1115 10:00:43.679657  608059 node_ready.go:57] node "embed-certs-430513" has "Ready":"False" status (will retry)
	I1115 10:00:42.547477  613222 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:00:42.547733  613222 start.go:159] libmachine.API.Create for "default-k8s-diff-port-679865" (driver="docker")
	I1115 10:00:42.547770  613222 client.go:173] LocalClient.Create starting
	I1115 10:00:42.547884  613222 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:00:42.547925  613222 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:42.547948  613222 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:42.548038  613222 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:00:42.548071  613222 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:42.548085  613222 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:42.548469  613222 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-679865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:00:42.567178  613222 cli_runner.go:211] docker network inspect default-k8s-diff-port-679865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:00:42.567257  613222 network_create.go:284] running [docker network inspect default-k8s-diff-port-679865] to gather additional debugging logs...
	I1115 10:00:42.567281  613222 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-679865
	W1115 10:00:42.584386  613222 cli_runner.go:211] docker network inspect default-k8s-diff-port-679865 returned with exit code 1
	I1115 10:00:42.584438  613222 network_create.go:287] error running [docker network inspect default-k8s-diff-port-679865]: docker network inspect default-k8s-diff-port-679865: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-679865 not found
	I1115 10:00:42.584471  613222 network_create.go:289] output of [docker network inspect default-k8s-diff-port-679865]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-679865 not found
	
	** /stderr **
	I1115 10:00:42.584573  613222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:00:42.603040  613222 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:00:42.603906  613222 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:00:42.604444  613222 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:00:42.605169  613222 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b5a35f2144e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:aa:c4:ce:f8:c4} reservation:<nil>}
	I1115 10:00:42.606050  613222 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80ed0}
	I1115 10:00:42.606077  613222 network_create.go:124] attempt to create docker network default-k8s-diff-port-679865 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:00:42.606138  613222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-679865 default-k8s-diff-port-679865
	I1115 10:00:42.656112  613222 network_create.go:108] docker network default-k8s-diff-port-679865 192.168.85.0/24 created
	I1115 10:00:42.656154  613222 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-679865" container
	I1115 10:00:42.656235  613222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:00:42.676920  613222 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-679865 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-679865 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:00:42.696061  613222 oci.go:103] Successfully created a docker volume default-k8s-diff-port-679865
	I1115 10:00:42.696250  613222 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-679865-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-679865 --entrypoint /usr/bin/test -v default-k8s-diff-port-679865:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:00:43.096008  613222 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-679865
	I1115 10:00:43.096073  613222 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:43.096086  613222 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:00:43.096157  613222 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-679865:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 15 10:00:11 no-preload-559401 crio[567]: time="2025-11-15T10:00:11.305636209Z" level=info msg="Created container 912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb/kubernetes-dashboard" id=7bc4a7ae-30a2-4fad-8dfb-e9721631d2eb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:11 no-preload-559401 crio[567]: time="2025-11-15T10:00:11.306331641Z" level=info msg="Starting container: 912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b" id=58ea131e-3e3c-4a69-94a0-5020a1f100d8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:11 no-preload-559401 crio[567]: time="2025-11-15T10:00:11.308429897Z" level=info msg="Started container" PID=1726 containerID=912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb/kubernetes-dashboard id=58ea131e-3e3c-4a69-94a0-5020a1f100d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc3e3c412fd0f9c1ed1e9cca18e469ffd3a5a927eb16328d9557b376216734cf
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.637973682Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1ae8199d-94d5-47fa-953a-cff7d8dbebb5 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.641326757Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fd005ac3-828c-47c4-8fd3-4a35b41b7c68 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.644690213Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper" id=fc2025df-2ccd-487a-9438-9a51dfdbb4ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.644843428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.651995675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.652571057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.683846145Z" level=info msg="Created container 03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper" id=fc2025df-2ccd-487a-9438-9a51dfdbb4ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.684609437Z" level=info msg="Starting container: 03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672" id=948c539e-6cb9-4e8b-9a37-fdf81533dbea name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.687253616Z" level=info msg="Started container" PID=1744 containerID=03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper id=948c539e-6cb9-4e8b-9a37-fdf81533dbea name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0d99edc61b619ceb945a31f3b74de01f1801ecd121ffff9178bec94a8ad6aa0
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.772378772Z" level=info msg="Removing container: 8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4" id=f1fa5cc9-f52d-47e6-b8d2-e4a7b608253c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.78249811Z" level=info msg="Removed container 8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper" id=f1fa5cc9-f52d-47e6-b8d2-e4a7b608253c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.788035831Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e31520fb-6bee-408f-b690-f5f24708257d name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.788974763Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=503a1b34-1150-4c54-a700-d5d5d26a2580 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.789969232Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f9213084-9722-4343-a66b-fa5e5b5eb561 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.790126256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.794803059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.795002399Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/86a0aa26ac914873b22dbf5b0bc2bc7b83c0f92de1b4b410b586a2c2c0304b70/merged/etc/passwd: no such file or directory"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.795038429Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/86a0aa26ac914873b22dbf5b0bc2bc7b83c0f92de1b4b410b586a2c2c0304b70/merged/etc/group: no such file or directory"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.795334931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.827001432Z" level=info msg="Created container 331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38: kube-system/storage-provisioner/storage-provisioner" id=f9213084-9722-4343-a66b-fa5e5b5eb561 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.827678615Z" level=info msg="Starting container: 331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38" id=15cef3ec-c069-46a0-86c4-71f018e20ee3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.829834304Z" level=info msg="Started container" PID=1758 containerID=331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38 description=kube-system/storage-provisioner/storage-provisioner id=15cef3ec-c069-46a0-86c4-71f018e20ee3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=321937c3bac5713836616b940c92c0cd46d921bbe68c4713db8c4c068a57b5ac
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	331707db8368c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   321937c3bac57       storage-provisioner                          kube-system
	03e38d186cb33       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   f0d99edc61b61       dashboard-metrics-scraper-6ffb444bf9-vn2wb   kubernetes-dashboard
	912d49aca42e2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   bc3e3c412fd0f       kubernetes-dashboard-855c9754f9-nhbwb        kubernetes-dashboard
	81fd5af3b453a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   b7d7a7fe4b86e       busybox                                      default
	d844e71d2c4dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   6a2744f7b4898       kindnet-b5x55                                kube-system
	7d47a971f6af8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   97d88ea443454       coredns-66bc5c9577-dh55n                     kube-system
	5c2dfc91efbcd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   d7713abd7591c       kube-proxy-sbk5r                             kube-system
	4577add359791       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   321937c3bac57       storage-provisioner                          kube-system
	0e0c907536637       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   bb8bd7fc1e620       etcd-no-preload-559401                       kube-system
	8895096ed1181       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   e11eae2b9fc32       kube-apiserver-no-preload-559401             kube-system
	6ac889c115f00       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   fcfa66159fe7d       kube-scheduler-no-preload-559401             kube-system
	e1a7a97a08ef5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   5338fb0e0de1a       kube-controller-manager-no-preload-559401    kube-system
	
	
	==> coredns [7d47a971f6af8ecd8aa0f07da9138293117a44b7e6908c8ae2a89bfb25fb9c01] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60249 - 56676 "HINFO IN 2994477425395392846.3722443190762492695. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020812239s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-559401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-559401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=no-preload-559401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_59_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:58:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-559401
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:00:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-559401
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                952f299f-14db-4c2b-b6e4-27ef9280d1fa
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-66bc5c9577-dh55n                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-no-preload-559401                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-b5x55                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-no-preload-559401              250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-no-preload-559401     200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-sbk5r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-no-preload-559401              100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vn2wb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nhbwb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node no-preload-559401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node no-preload-559401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s               kubelet          Node no-preload-559401 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node no-preload-559401 event: Registered Node no-preload-559401 in Controller
	  Normal  NodeReady                88s                kubelet          Node no-preload-559401 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node no-preload-559401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node no-preload-559401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node no-preload-559401 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node no-preload-559401 event: Registered Node no-preload-559401 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [0e0c907536637f4671373a2fb17787378e0cb3601c00f76492ee5288116e81c8] <==
	{"level":"warn","ts":"2025-11-15T09:59:58.322863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.329238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.336725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.344642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.352823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.365472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.377588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.382010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.391854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.425559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.432976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.444951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.453718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.525176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57764","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:00:07.863987Z","caller":"traceutil/trace.go:172","msg":"trace[388600050] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"139.810313ms","start":"2025-11-15T10:00:07.724144Z","end":"2025-11-15T10:00:07.863955Z","steps":["trace[388600050] 'process raft request'  (duration: 87.498788ms)","trace[388600050] 'compare'  (duration: 52.172882ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:00:19.870665Z","caller":"traceutil/trace.go:172","msg":"trace[1819233708] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:632; }","duration":"129.082882ms","start":"2025-11-15T10:00:19.741558Z","end":"2025-11-15T10:00:19.870641Z","steps":["trace[1819233708] 'read index received'  (duration: 129.072692ms)","trace[1819233708] 'applied index is now lower than readState.Index'  (duration: 8.935µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.012095Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"302.974995ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T10:00:20.012172Z","caller":"traceutil/trace.go:172","msg":"trace[837370168] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:599; }","duration":"303.065083ms","start":"2025-11-15T10:00:19.709094Z","end":"2025-11-15T10:00:20.012159Z","steps":["trace[837370168] 'agreement among raft nodes before linearized reading'  (duration: 161.636998ms)","trace[837370168] 'range keys from in-memory index tree'  (duration: 141.321189ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.012881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.558123ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790031518912284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" mod_revision:579 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:00:20.012974Z","caller":"traceutil/trace.go:172","msg":"trace[1102186639] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"331.341649ms","start":"2025-11-15T10:00:19.681615Z","end":"2025-11-15T10:00:20.012956Z","steps":["trace[1102186639] 'process raft request'  (duration: 189.108961ms)","trace[1102186639] 'compare'  (duration: 141.239371ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:00:20.013043Z","caller":"traceutil/trace.go:172","msg":"trace[856752022] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"248.946417ms","start":"2025-11-15T10:00:19.764086Z","end":"2025-11-15T10:00:20.013033Z","steps":["trace[856752022] 'process raft request'  (duration: 248.875267ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.013178Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"268.581135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-559401\" limit:1 ","response":"range_response_count:1 size:4876"}
	{"level":"info","ts":"2025-11-15T10:00:20.013200Z","caller":"traceutil/trace.go:172","msg":"trace[95401798] range","detail":"{range_begin:/registry/minions/no-preload-559401; range_end:; response_count:1; response_revision:601; }","duration":"268.606514ms","start":"2025-11-15T10:00:19.744588Z","end":"2025-11-15T10:00:20.013194Z","steps":["trace[95401798] 'agreement among raft nodes before linearized reading'  (duration: 268.548448ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.013072Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T10:00:19.681594Z","time spent":"331.42897ms","remote":"127.0.0.1:57090","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" mod_revision:579 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" > >"}
	{"level":"info","ts":"2025-11-15T10:00:20.013122Z","caller":"traceutil/trace.go:172","msg":"trace[1509135871] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:632; }","duration":"142.375897ms","start":"2025-11-15T10:00:19.870736Z","end":"2025-11-15T10:00:20.013112Z","steps":["trace[1509135871] 'read index received'  (duration: 23.436059ms)","trace[1509135871] 'applied index is now lower than readState.Index'  (duration: 118.938961ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:00:49 up  1:43,  0 user,  load average: 2.64, 2.44, 1.72
	Linux no-preload-559401 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d844e71d2c4dcba665557eaabf99f9fdf94b2403dcfc278ac27d957559053a0a] <==
	I1115 10:00:00.381286       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:00:00.381649       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:00:00.381958       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:00:00.382020       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:00:00.382065       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:00:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:00:00.626161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:00:00.781093       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:00:00.781116       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:00:00.781287       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:00:00.981342       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:00:00.981371       1 metrics.go:72] Registering metrics
	I1115 10:00:00.981459       1 controller.go:711] "Syncing nftables rules"
	I1115 10:00:10.625668       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:10.625751       1 main.go:301] handling current node
	I1115 10:00:20.625667       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:20.625710       1 main.go:301] handling current node
	I1115 10:00:30.626022       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:30.626070       1 main.go:301] handling current node
	I1115 10:00:40.632500       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:40.632529       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8895096ed11812bd45be0812f3ddacb441137c37505fa8846ad04fb1c033843b] <==
	I1115 09:59:59.273171       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 09:59:59.274102       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 09:59:59.274651       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 09:59:59.274846       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:59:59.284284       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 09:59:59.288719       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 09:59:59.288924       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 09:59:59.288952       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 09:59:59.299701       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 09:59:59.305676       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 09:59:59.314652       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 09:59:59.314753       1 policy_source.go:240] refreshing policies
	I1115 09:59:59.328509       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 09:59:59.343018       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:59:59.686164       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:59:59.737104       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:59:59.744884       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:59:59.784123       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:59:59.800864       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:59:59.882529       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.226.220"}
	I1115 09:59:59.900144       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.203.15"}
	I1115 10:00:00.083597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:00:02.558928       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:00:03.008929       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:00:03.062163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e1a7a97a08ef5ef64767e999edbcfdfc0ad52e1760fecfbba7b4ca857c71ea4b] <==
	I1115 10:00:02.555796       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:00:02.555810       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:00:02.555860       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:00:02.555900       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:00:02.555945       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:00:02.556028       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:00:02.556741       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:00:02.556809       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:00:02.557191       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:00:02.557244       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:00:02.559734       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:00:02.559839       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:00:02.560292       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:00:02.560480       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-559401"
	I1115 10:00:02.560533       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:00:02.562487       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:00:02.563257       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:00:02.565199       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:00:02.565260       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:00:02.567354       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:00:02.574584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:00:02.574602       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:00:02.574611       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:00:02.577318       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:00:02.587979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5c2dfc91efbcd6fc8a96bb97ab98fffb24a7769e1d692bc2a99b9906e2394220] <==
	I1115 10:00:00.160058       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:00:00.232804       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:00:00.333040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:00:00.333151       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:00:00.333284       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:00:00.364517       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:00:00.364659       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:00:00.388021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:00:00.388348       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:00:00.388650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:00:00.391021       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:00:00.391045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:00:00.391081       1 config.go:200] "Starting service config controller"
	I1115 10:00:00.391093       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:00:00.391113       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:00:00.391124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:00:00.391159       1 config.go:309] "Starting node config controller"
	I1115 10:00:00.391197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:00:00.391222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:00:00.491216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:00:00.491340       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:00:00.491416       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6ac889c115f00328ac4c19198ba12abd9a0f7d168f55ba530681cda91918cbf8] <==
	I1115 09:59:58.604433       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:00:00.146972       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:00:00.148665       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:00:00.156005       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:00:00.156191       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:00:00.156024       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:00:00.156018       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:00:00.157944       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:00:00.157963       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:00:00.165226       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:00:00.165507       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:00:00.257487       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:00:00.258643       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:00:00.258557       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:00:03 no-preload-559401 kubelet[716]: I1115 10:00:03.258019     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2d9ce6e2-8199-4088-ad8b-2678ace0fb0a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn2wb\" (UID: \"2d9ce6e2-8199-4088-ad8b-2678ace0fb0a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb"
	Nov 15 10:00:03 no-preload-559401 kubelet[716]: I1115 10:00:03.258049     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pq9\" (UniqueName: \"kubernetes.io/projected/2d9ce6e2-8199-4088-ad8b-2678ace0fb0a-kube-api-access-h6pq9\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn2wb\" (UID: \"2d9ce6e2-8199-4088-ad8b-2678ace0fb0a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb"
	Nov 15 10:00:03 no-preload-559401 kubelet[716]: I1115 10:00:03.258095     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6kq\" (UniqueName: \"kubernetes.io/projected/b2804b3e-3418-4b75-93a0-a568ca6de288-kube-api-access-rd6kq\") pod \"kubernetes-dashboard-855c9754f9-nhbwb\" (UID: \"b2804b3e-3418-4b75-93a0-a568ca6de288\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb"
	Nov 15 10:00:06 no-preload-559401 kubelet[716]: I1115 10:00:06.716713     716 scope.go:117] "RemoveContainer" containerID="770fbd283a29a9c5353934276fbf6dd9103402264f4fbdfb1661304eb99998d0"
	Nov 15 10:00:07 no-preload-559401 kubelet[716]: I1115 10:00:07.721609     716 scope.go:117] "RemoveContainer" containerID="770fbd283a29a9c5353934276fbf6dd9103402264f4fbdfb1661304eb99998d0"
	Nov 15 10:00:07 no-preload-559401 kubelet[716]: I1115 10:00:07.721765     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:07 no-preload-559401 kubelet[716]: E1115 10:00:07.721953     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:08 no-preload-559401 kubelet[716]: I1115 10:00:08.727479     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:08 no-preload-559401 kubelet[716]: E1115 10:00:08.727639     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:09 no-preload-559401 kubelet[716]: I1115 10:00:09.730080     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:09 no-preload-559401 kubelet[716]: E1115 10:00:09.730279     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:14 no-preload-559401 kubelet[716]: I1115 10:00:14.090535     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb" podStartSLOduration=3.35800339 podStartE2EDuration="11.090504427s" podCreationTimestamp="2025-11-15 10:00:03 +0000 UTC" firstStartedPulling="2025-11-15 10:00:03.532819644 +0000 UTC m=+6.987891833" lastFinishedPulling="2025-11-15 10:00:11.26532068 +0000 UTC m=+14.720392870" observedRunningTime="2025-11-15 10:00:11.752328639 +0000 UTC m=+15.207400844" watchObservedRunningTime="2025-11-15 10:00:14.090504427 +0000 UTC m=+17.545576633"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: I1115 10:00:24.637407     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: I1115 10:00:24.770610     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: I1115 10:00:24.770836     716 scope.go:117] "RemoveContainer" containerID="03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: E1115 10:00:24.771029     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:27 no-preload-559401 kubelet[716]: I1115 10:00:27.755782     716 scope.go:117] "RemoveContainer" containerID="03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	Nov 15 10:00:27 no-preload-559401 kubelet[716]: E1115 10:00:27.756017     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:30 no-preload-559401 kubelet[716]: I1115 10:00:30.787631     716 scope.go:117] "RemoveContainer" containerID="4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538"
	Nov 15 10:00:40 no-preload-559401 kubelet[716]: I1115 10:00:40.639127     716 scope.go:117] "RemoveContainer" containerID="03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	Nov 15 10:00:40 no-preload-559401 kubelet[716]: E1115 10:00:40.639342     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:46 no-preload-559401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:00:46 no-preload-559401 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:00:46 no-preload-559401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:00:46 no-preload-559401 systemd[1]: kubelet.service: Consumed 1.638s CPU time.
	
	
	==> kubernetes-dashboard [912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b] <==
	2025/11/15 10:00:11 Using namespace: kubernetes-dashboard
	2025/11/15 10:00:11 Using in-cluster config to connect to apiserver
	2025/11/15 10:00:11 Using secret token for csrf signing
	2025/11/15 10:00:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:00:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:00:11 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:00:11 Generating JWE encryption key
	2025/11/15 10:00:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:00:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:00:11 Initializing JWE encryption key from synchronized object
	2025/11/15 10:00:11 Creating in-cluster Sidecar client
	2025/11/15 10:00:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:11 Serving insecurely on HTTP port: 9090
	2025/11/15 10:00:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:11 Starting overwatch
	
	
	==> storage-provisioner [331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38] <==
	I1115 10:00:30.843085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:00:30.850757       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:00:30.850806       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:00:30.853149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:34.307784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:38.568155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:42.166762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:45.221613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:48.245830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:48.254489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:00:48.255419       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:00:48.255769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-559401_1db0fae6-4390-4472-9bf1-9c6b157168db!
	I1115 10:00:48.256118       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74ac0aca-4a5f-408d-9b7f-c3e70ed087ad", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-559401_1db0fae6-4390-4472-9bf1-9c6b157168db became leader
	W1115 10:00:48.265514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:48.270701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:00:48.356678       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-559401_1db0fae6-4390-4472-9bf1-9c6b157168db!
	
	
	==> storage-provisioner [4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538] <==
	I1115 10:00:00.094478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:00:30.100678       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-559401 -n no-preload-559401
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-559401 -n no-preload-559401: exit status 2 (335.219851ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-559401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-559401
helpers_test.go:243: (dbg) docker inspect no-preload-559401:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e",
	        "Created": "2025-11-15T09:58:30.798243596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 603315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:59:50.38857041Z",
	            "FinishedAt": "2025-11-15T09:59:49.493298356Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/hostname",
	        "HostsPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/hosts",
	        "LogPath": "/var/lib/docker/containers/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e/96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e-json.log",
	        "Name": "/no-preload-559401",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-559401:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-559401",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "96bf94e265bead07f6e73edaf9de82b5fe75a321bec703a172b8b01c20b2697e",
	                "LowerDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/164a1a1235fac955785744348ec2ac413956b6a413469f5c1a071ecc18e0b87f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-559401",
	                "Source": "/var/lib/docker/volumes/no-preload-559401/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-559401",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-559401",
	                "name.minikube.sigs.k8s.io": "no-preload-559401",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c955cf39ff6262eb66cd5c9fd33f8a1ba0045fc3bc136e92f4931aa5c42e101",
	            "SandboxKey": "/var/run/docker/netns/5c955cf39ff6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-559401": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9778bfb33840535be1dad946c45c61cf82a33a723dc88bd05e11d71cf2fc0a9f",
	                    "EndpointID": "a48023370a0cbddea144ed8bf93d19cbd309c55de0c990ed0844a839533b7528",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "5e:cf:b8:c1:17:94",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-559401",
	                        "96bf94e265be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
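For reference, the inspect dump above shows the API server port 8443/tcp published on 127.0.0.1:33447. That single value can be read with a `docker container inspect` Go template instead of dumping the full JSON; the sketch below is illustrative only (it is not code from the harness) and assumes a local Docker daemon and the container name no-preload-559401 taken from the output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor shells out to `docker container inspect` with a Go template
// and returns the host port published for the given container port
// (e.g. "8443/tcp" -> "33447" for the container inspected above).
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "--format", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPortFor("no-preload-559401", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("API server published on 127.0.0.1:" + p)
}

Against the container inspected above this would print something like "API server published on 127.0.0.1:33447".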
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401: exit status 2 (342.198074ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
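The "(may be ok)" note reflects that minikube's status command reports component state through its exit code, so a non-zero exit does not necessarily mean the command itself failed: here the host prints Running while something else (most likely the paused or stopped cluster components in this Pause test) drives exit status 2. A minimal sketch of that distinction using os/exec, assuming a `minikube` binary on PATH and the profile name no-preload-559401; this is illustrative and not the harness's implementation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// Run `minikube status --format={{.Host}}` and separate "command could not
// run at all" from "ran, but reported a non-running component via its exit
// code", which is the case the post-mortem helper tolerates above.
func main() {
	cmd := exec.Command("minikube", "status", "--format", "{{.Host}}", "-p", "no-preload-559401")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host:", host, "(all components reported running)")
	case errors.As(err, &exitErr):
		// Non-zero exit: the host may still be Running (as above) while
		// the cluster is paused or stopped, so this is not treated as fatal.
		fmt.Printf("host: %s (exit status %d, may be ok)\n", host, exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube status:", err)
	}
}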
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-559401 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-559401 logs -n 25: (1.192712005s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ ssh     │ cert-options-759344 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-759344          │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ ssh     │ -p cert-options-759344 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-759344          │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ delete  │ -p cert-options-759344                                                                                                                                                                                                                        │ cert-options-759344          │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:58 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:58 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p old-k8s-version-335655 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p no-preload-559401 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ addons  │ enable dashboard -p no-preload-559401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p kubernetes-upgrade-405833                                                                                                                                                                                                                  │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ image   │ old-k8s-version-335655 image list --format=json                                                                                                                                                                                               │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p old-k8s-version-335655 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p disable-driver-mounts-553319                                                                                                                                                                                                               │ disable-driver-mounts-553319 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ image   │ no-preload-559401 image list --format=json                                                                                                                                                                                                    │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p no-preload-559401 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:00:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:00:42.348267  613222 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:00:42.348585  613222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:42.348597  613222 out.go:374] Setting ErrFile to fd 2...
	I1115 10:00:42.348603  613222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:00:42.348895  613222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:00:42.349540  613222 out.go:368] Setting JSON to false
	I1115 10:00:42.351147  613222 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6183,"bootTime":1763194659,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:00:42.351242  613222 start.go:143] virtualization: kvm guest
	I1115 10:00:42.353315  613222 out.go:179] * [default-k8s-diff-port-679865] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:00:42.354733  613222 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:00:42.354740  613222 notify.go:221] Checking for updates...
	I1115 10:00:42.357282  613222 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:00:42.358849  613222 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:00:42.360922  613222 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:00:42.362131  613222 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:00:42.364287  613222 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:00:42.368050  613222 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:42.368198  613222 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:42.368317  613222 config.go:182] Loaded profile config "no-preload-559401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:00:42.368465  613222 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:00:42.393995  613222 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:00:42.394106  613222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:42.453363  613222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:42.442303723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:42.453518  613222 docker.go:319] overlay module found
	I1115 10:00:42.455464  613222 out.go:179] * Using the docker driver based on user configuration
	I1115 10:00:42.456699  613222 start.go:309] selected driver: docker
	I1115 10:00:42.456719  613222 start.go:930] validating driver "docker" against <nil>
	I1115 10:00:42.456734  613222 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:00:42.457333  613222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:00:42.514470  613222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:00:42.504670414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:00:42.514625  613222 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:00:42.514863  613222 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:00:42.516631  613222 out.go:179] * Using Docker driver with root privileges
	I1115 10:00:42.517811  613222 cni.go:84] Creating CNI manager for ""
	I1115 10:00:42.517871  613222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:00:42.517881  613222 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 10:00:42.517959  613222 start.go:353] cluster config:
	{Name:default-k8s-diff-port-679865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-679865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:00:42.519340  613222 out.go:179] * Starting "default-k8s-diff-port-679865" primary control-plane node in "default-k8s-diff-port-679865" cluster
	I1115 10:00:42.520518  613222 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:00:42.522088  613222 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:00:42.523305  613222 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:42.523347  613222 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:00:42.523365  613222 cache.go:65] Caching tarball of preloaded images
	I1115 10:00:42.523425  613222 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:00:42.523508  613222 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:00:42.523526  613222 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:00:42.523639  613222 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/default-k8s-diff-port-679865/config.json ...
	I1115 10:00:42.523667  613222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/default-k8s-diff-port-679865/config.json: {Name:mkd225497c5387e10afe68a2a3044f4b4cc1bc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:00:42.544548  613222 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:00:42.544575  613222 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:00:42.544595  613222 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:00:42.544626  613222 start.go:360] acquireMachinesLock for default-k8s-diff-port-679865: {Name:mke1c48082f838819f77221e2758b30fa6645123 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:00:42.544742  613222 start.go:364] duration metric: took 95.537µs to acquireMachinesLock for "default-k8s-diff-port-679865"
	I1115 10:00:42.544773  613222 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-679865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-679865 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:00:42.544887  613222 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:00:41.902678  608059 addons.go:515] duration metric: took 531.970039ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:00:42.180648  608059 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-430513" context rescaled to 1 replicas
	W1115 10:00:43.679657  608059 node_ready.go:57] node "embed-certs-430513" has "Ready":"False" status (will retry)
	I1115 10:00:42.547477  613222 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:00:42.547733  613222 start.go:159] libmachine.API.Create for "default-k8s-diff-port-679865" (driver="docker")
	I1115 10:00:42.547770  613222 client.go:173] LocalClient.Create starting
	I1115 10:00:42.547884  613222 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:00:42.547925  613222 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:42.547948  613222 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:42.548038  613222 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:00:42.548071  613222 main.go:143] libmachine: Decoding PEM data...
	I1115 10:00:42.548085  613222 main.go:143] libmachine: Parsing certificate...
	I1115 10:00:42.548469  613222 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-679865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:00:42.567178  613222 cli_runner.go:211] docker network inspect default-k8s-diff-port-679865 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:00:42.567257  613222 network_create.go:284] running [docker network inspect default-k8s-diff-port-679865] to gather additional debugging logs...
	I1115 10:00:42.567281  613222 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-679865
	W1115 10:00:42.584386  613222 cli_runner.go:211] docker network inspect default-k8s-diff-port-679865 returned with exit code 1
	I1115 10:00:42.584438  613222 network_create.go:287] error running [docker network inspect default-k8s-diff-port-679865]: docker network inspect default-k8s-diff-port-679865: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-679865 not found
	I1115 10:00:42.584471  613222 network_create.go:289] output of [docker network inspect default-k8s-diff-port-679865]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-679865 not found
	
	** /stderr **
	I1115 10:00:42.584573  613222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:00:42.603040  613222 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:00:42.603906  613222 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:00:42.604444  613222 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:00:42.605169  613222 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b5a35f2144e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:aa:c4:ce:f8:c4} reservation:<nil>}
	I1115 10:00:42.606050  613222 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d80ed0}
	I1115 10:00:42.606077  613222 network_create.go:124] attempt to create docker network default-k8s-diff-port-679865 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1115 10:00:42.606138  613222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-679865 default-k8s-diff-port-679865
	I1115 10:00:42.656112  613222 network_create.go:108] docker network default-k8s-diff-port-679865 192.168.85.0/24 created
	I1115 10:00:42.656154  613222 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-679865" container
	I1115 10:00:42.656235  613222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:00:42.676920  613222 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-679865 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-679865 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:00:42.696061  613222 oci.go:103] Successfully created a docker volume default-k8s-diff-port-679865
	I1115 10:00:42.696250  613222 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-679865-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-679865 --entrypoint /usr/bin/test -v default-k8s-diff-port-679865:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:00:43.096008  613222 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-679865
	I1115 10:00:43.096073  613222 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:00:43.096086  613222 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:00:43.096157  613222 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-679865:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 15 10:00:11 no-preload-559401 crio[567]: time="2025-11-15T10:00:11.305636209Z" level=info msg="Created container 912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb/kubernetes-dashboard" id=7bc4a7ae-30a2-4fad-8dfb-e9721631d2eb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:11 no-preload-559401 crio[567]: time="2025-11-15T10:00:11.306331641Z" level=info msg="Starting container: 912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b" id=58ea131e-3e3c-4a69-94a0-5020a1f100d8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:11 no-preload-559401 crio[567]: time="2025-11-15T10:00:11.308429897Z" level=info msg="Started container" PID=1726 containerID=912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb/kubernetes-dashboard id=58ea131e-3e3c-4a69-94a0-5020a1f100d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc3e3c412fd0f9c1ed1e9cca18e469ffd3a5a927eb16328d9557b376216734cf
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.637973682Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1ae8199d-94d5-47fa-953a-cff7d8dbebb5 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.641326757Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fd005ac3-828c-47c4-8fd3-4a35b41b7c68 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.644690213Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper" id=fc2025df-2ccd-487a-9438-9a51dfdbb4ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.644843428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.651995675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.652571057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.683846145Z" level=info msg="Created container 03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper" id=fc2025df-2ccd-487a-9438-9a51dfdbb4ed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.684609437Z" level=info msg="Starting container: 03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672" id=948c539e-6cb9-4e8b-9a37-fdf81533dbea name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.687253616Z" level=info msg="Started container" PID=1744 containerID=03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper id=948c539e-6cb9-4e8b-9a37-fdf81533dbea name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0d99edc61b619ceb945a31f3b74de01f1801ecd121ffff9178bec94a8ad6aa0
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.772378772Z" level=info msg="Removing container: 8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4" id=f1fa5cc9-f52d-47e6-b8d2-e4a7b608253c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:24 no-preload-559401 crio[567]: time="2025-11-15T10:00:24.78249811Z" level=info msg="Removed container 8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb/dashboard-metrics-scraper" id=f1fa5cc9-f52d-47e6-b8d2-e4a7b608253c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.788035831Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e31520fb-6bee-408f-b690-f5f24708257d name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.788974763Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=503a1b34-1150-4c54-a700-d5d5d26a2580 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.789969232Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f9213084-9722-4343-a66b-fa5e5b5eb561 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.790126256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.794803059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.795002399Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/86a0aa26ac914873b22dbf5b0bc2bc7b83c0f92de1b4b410b586a2c2c0304b70/merged/etc/passwd: no such file or directory"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.795038429Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/86a0aa26ac914873b22dbf5b0bc2bc7b83c0f92de1b4b410b586a2c2c0304b70/merged/etc/group: no such file or directory"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.795334931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.827001432Z" level=info msg="Created container 331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38: kube-system/storage-provisioner/storage-provisioner" id=f9213084-9722-4343-a66b-fa5e5b5eb561 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.827678615Z" level=info msg="Starting container: 331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38" id=15cef3ec-c069-46a0-86c4-71f018e20ee3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:30 no-preload-559401 crio[567]: time="2025-11-15T10:00:30.829834304Z" level=info msg="Started container" PID=1758 containerID=331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38 description=kube-system/storage-provisioner/storage-provisioner id=15cef3ec-c069-46a0-86c4-71f018e20ee3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=321937c3bac5713836616b940c92c0cd46d921bbe68c4713db8c4c068a57b5ac
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	331707db8368c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   321937c3bac57       storage-provisioner                          kube-system
	03e38d186cb33       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   f0d99edc61b61       dashboard-metrics-scraper-6ffb444bf9-vn2wb   kubernetes-dashboard
	912d49aca42e2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   bc3e3c412fd0f       kubernetes-dashboard-855c9754f9-nhbwb        kubernetes-dashboard
	81fd5af3b453a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   b7d7a7fe4b86e       busybox                                      default
	d844e71d2c4dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   6a2744f7b4898       kindnet-b5x55                                kube-system
	7d47a971f6af8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   97d88ea443454       coredns-66bc5c9577-dh55n                     kube-system
	5c2dfc91efbcd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   d7713abd7591c       kube-proxy-sbk5r                             kube-system
	4577add359791       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   321937c3bac57       storage-provisioner                          kube-system
	0e0c907536637       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   bb8bd7fc1e620       etcd-no-preload-559401                       kube-system
	8895096ed1181       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   e11eae2b9fc32       kube-apiserver-no-preload-559401             kube-system
	6ac889c115f00       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   fcfa66159fe7d       kube-scheduler-no-preload-559401             kube-system
	e1a7a97a08ef5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   5338fb0e0de1a       kube-controller-manager-no-preload-559401    kube-system
	
	
	==> coredns [7d47a971f6af8ecd8aa0f07da9138293117a44b7e6908c8ae2a89bfb25fb9c01] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60249 - 56676 "HINFO IN 2994477425395392846.3722443190762492695. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020812239s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-559401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-559401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=no-preload-559401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_59_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:58:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-559401
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:00:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:00:30 +0000   Sat, 15 Nov 2025 09:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-559401
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                952f299f-14db-4c2b-b6e4-27ef9280d1fa
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-dh55n                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-559401                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-b5x55                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-no-preload-559401              250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-559401     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-sbk5r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-no-preload-559401              100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vn2wb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nhbwb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node no-preload-559401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node no-preload-559401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node no-preload-559401 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node no-preload-559401 event: Registered Node no-preload-559401 in Controller
	  Normal  NodeReady                90s                kubelet          Node no-preload-559401 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node no-preload-559401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node no-preload-559401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node no-preload-559401 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node no-preload-559401 event: Registered Node no-preload-559401 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [0e0c907536637f4671373a2fb17787378e0cb3601c00f76492ee5288116e81c8] <==
	{"level":"warn","ts":"2025-11-15T09:59:58.322863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.329238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.336725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.344642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.352823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.365472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.377588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.382010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.391854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.425559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.432976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.444951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.453718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:59:58.525176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57764","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:00:07.863987Z","caller":"traceutil/trace.go:172","msg":"trace[388600050] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"139.810313ms","start":"2025-11-15T10:00:07.724144Z","end":"2025-11-15T10:00:07.863955Z","steps":["trace[388600050] 'process raft request'  (duration: 87.498788ms)","trace[388600050] 'compare'  (duration: 52.172882ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:00:19.870665Z","caller":"traceutil/trace.go:172","msg":"trace[1819233708] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:632; }","duration":"129.082882ms","start":"2025-11-15T10:00:19.741558Z","end":"2025-11-15T10:00:19.870641Z","steps":["trace[1819233708] 'read index received'  (duration: 129.072692ms)","trace[1819233708] 'applied index is now lower than readState.Index'  (duration: 8.935µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.012095Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"302.974995ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-15T10:00:20.012172Z","caller":"traceutil/trace.go:172","msg":"trace[837370168] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:599; }","duration":"303.065083ms","start":"2025-11-15T10:00:19.709094Z","end":"2025-11-15T10:00:20.012159Z","steps":["trace[837370168] 'agreement among raft nodes before linearized reading'  (duration: 161.636998ms)","trace[837370168] 'range keys from in-memory index tree'  (duration: 141.321189ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:00:20.012881Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.558123ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790031518912284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" mod_revision:579 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:00:20.012974Z","caller":"traceutil/trace.go:172","msg":"trace[1102186639] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"331.341649ms","start":"2025-11-15T10:00:19.681615Z","end":"2025-11-15T10:00:20.012956Z","steps":["trace[1102186639] 'process raft request'  (duration: 189.108961ms)","trace[1102186639] 'compare'  (duration: 141.239371ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:00:20.013043Z","caller":"traceutil/trace.go:172","msg":"trace[856752022] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"248.946417ms","start":"2025-11-15T10:00:19.764086Z","end":"2025-11-15T10:00:20.013033Z","steps":["trace[856752022] 'process raft request'  (duration: 248.875267ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.013178Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"268.581135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-559401\" limit:1 ","response":"range_response_count:1 size:4876"}
	{"level":"info","ts":"2025-11-15T10:00:20.013200Z","caller":"traceutil/trace.go:172","msg":"trace[95401798] range","detail":"{range_begin:/registry/minions/no-preload-559401; range_end:; response_count:1; response_revision:601; }","duration":"268.606514ms","start":"2025-11-15T10:00:19.744588Z","end":"2025-11-15T10:00:20.013194Z","steps":["trace[95401798] 'agreement among raft nodes before linearized reading'  (duration: 268.548448ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:00:20.013072Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T10:00:19.681594Z","time spent":"331.42897ms","remote":"127.0.0.1:57090","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" mod_revision:579 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ndyt7xoch7sqiwsj3fyujicvdm\" > >"}
	{"level":"info","ts":"2025-11-15T10:00:20.013122Z","caller":"traceutil/trace.go:172","msg":"trace[1509135871] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:632; }","duration":"142.375897ms","start":"2025-11-15T10:00:19.870736Z","end":"2025-11-15T10:00:20.013112Z","steps":["trace[1509135871] 'read index received'  (duration: 23.436059ms)","trace[1509135871] 'applied index is now lower than readState.Index'  (duration: 118.938961ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:00:50 up  1:43,  0 user,  load average: 2.64, 2.44, 1.72
	Linux no-preload-559401 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d844e71d2c4dcba665557eaabf99f9fdf94b2403dcfc278ac27d957559053a0a] <==
	I1115 10:00:00.381286       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:00:00.381649       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:00:00.381958       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:00:00.382020       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:00:00.382065       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:00:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:00:00.626161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:00:00.781093       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:00:00.781116       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:00:00.781287       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:00:00.981342       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:00:00.981371       1 metrics.go:72] Registering metrics
	I1115 10:00:00.981459       1 controller.go:711] "Syncing nftables rules"
	I1115 10:00:10.625668       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:10.625751       1 main.go:301] handling current node
	I1115 10:00:20.625667       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:20.625710       1 main.go:301] handling current node
	I1115 10:00:30.626022       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:30.626070       1 main.go:301] handling current node
	I1115 10:00:40.632500       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:40.632529       1 main.go:301] handling current node
	I1115 10:00:50.634868       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1115 10:00:50.634914       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8895096ed11812bd45be0812f3ddacb441137c37505fa8846ad04fb1c033843b] <==
	I1115 09:59:59.273171       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 09:59:59.274102       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 09:59:59.274651       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 09:59:59.274846       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 09:59:59.284284       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 09:59:59.288719       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 09:59:59.288924       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 09:59:59.288952       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 09:59:59.299701       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 09:59:59.305676       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 09:59:59.314652       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 09:59:59.314753       1 policy_source.go:240] refreshing policies
	I1115 09:59:59.328509       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 09:59:59.343018       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:59:59.686164       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:59:59.737104       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:59:59.744884       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 09:59:59.784123       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:59:59.800864       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:59:59.882529       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.226.220"}
	I1115 09:59:59.900144       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.203.15"}
	I1115 10:00:00.083597       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:00:02.558928       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:00:03.008929       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:00:03.062163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e1a7a97a08ef5ef64767e999edbcfdfc0ad52e1760fecfbba7b4ca857c71ea4b] <==
	I1115 10:00:02.555796       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:00:02.555810       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:00:02.555860       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:00:02.555900       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:00:02.555945       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:00:02.556028       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:00:02.556741       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:00:02.556809       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:00:02.557191       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:00:02.557244       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:00:02.559734       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:00:02.559839       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:00:02.560292       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:00:02.560480       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-559401"
	I1115 10:00:02.560533       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1115 10:00:02.562487       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:00:02.563257       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:00:02.565199       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:00:02.565260       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:00:02.567354       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:00:02.574584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:00:02.574602       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:00:02.574611       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:00:02.577318       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:00:02.587979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5c2dfc91efbcd6fc8a96bb97ab98fffb24a7769e1d692bc2a99b9906e2394220] <==
	I1115 10:00:00.160058       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:00:00.232804       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:00:00.333040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:00:00.333151       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:00:00.333284       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:00:00.364517       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:00:00.364659       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:00:00.388021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:00:00.388348       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:00:00.388650       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:00:00.391021       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:00:00.391045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:00:00.391081       1 config.go:200] "Starting service config controller"
	I1115 10:00:00.391093       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:00:00.391113       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:00:00.391124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:00:00.391159       1 config.go:309] "Starting node config controller"
	I1115 10:00:00.391197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:00:00.391222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:00:00.491216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:00:00.491340       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:00:00.491416       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6ac889c115f00328ac4c19198ba12abd9a0f7d168f55ba530681cda91918cbf8] <==
	I1115 09:59:58.604433       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:00:00.146972       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:00:00.148665       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:00:00.156005       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:00:00.156191       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:00:00.156024       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:00:00.156018       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:00:00.157944       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:00:00.157963       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:00:00.165226       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:00:00.165507       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:00:00.257487       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:00:00.258643       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:00:00.258557       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:00:03 no-preload-559401 kubelet[716]: I1115 10:00:03.258019     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2d9ce6e2-8199-4088-ad8b-2678ace0fb0a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn2wb\" (UID: \"2d9ce6e2-8199-4088-ad8b-2678ace0fb0a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb"
	Nov 15 10:00:03 no-preload-559401 kubelet[716]: I1115 10:00:03.258049     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pq9\" (UniqueName: \"kubernetes.io/projected/2d9ce6e2-8199-4088-ad8b-2678ace0fb0a-kube-api-access-h6pq9\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn2wb\" (UID: \"2d9ce6e2-8199-4088-ad8b-2678ace0fb0a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb"
	Nov 15 10:00:03 no-preload-559401 kubelet[716]: I1115 10:00:03.258095     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6kq\" (UniqueName: \"kubernetes.io/projected/b2804b3e-3418-4b75-93a0-a568ca6de288-kube-api-access-rd6kq\") pod \"kubernetes-dashboard-855c9754f9-nhbwb\" (UID: \"b2804b3e-3418-4b75-93a0-a568ca6de288\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb"
	Nov 15 10:00:06 no-preload-559401 kubelet[716]: I1115 10:00:06.716713     716 scope.go:117] "RemoveContainer" containerID="770fbd283a29a9c5353934276fbf6dd9103402264f4fbdfb1661304eb99998d0"
	Nov 15 10:00:07 no-preload-559401 kubelet[716]: I1115 10:00:07.721609     716 scope.go:117] "RemoveContainer" containerID="770fbd283a29a9c5353934276fbf6dd9103402264f4fbdfb1661304eb99998d0"
	Nov 15 10:00:07 no-preload-559401 kubelet[716]: I1115 10:00:07.721765     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:07 no-preload-559401 kubelet[716]: E1115 10:00:07.721953     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:08 no-preload-559401 kubelet[716]: I1115 10:00:08.727479     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:08 no-preload-559401 kubelet[716]: E1115 10:00:08.727639     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:09 no-preload-559401 kubelet[716]: I1115 10:00:09.730080     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:09 no-preload-559401 kubelet[716]: E1115 10:00:09.730279     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:14 no-preload-559401 kubelet[716]: I1115 10:00:14.090535     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nhbwb" podStartSLOduration=3.35800339 podStartE2EDuration="11.090504427s" podCreationTimestamp="2025-11-15 10:00:03 +0000 UTC" firstStartedPulling="2025-11-15 10:00:03.532819644 +0000 UTC m=+6.987891833" lastFinishedPulling="2025-11-15 10:00:11.26532068 +0000 UTC m=+14.720392870" observedRunningTime="2025-11-15 10:00:11.752328639 +0000 UTC m=+15.207400844" watchObservedRunningTime="2025-11-15 10:00:14.090504427 +0000 UTC m=+17.545576633"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: I1115 10:00:24.637407     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: I1115 10:00:24.770610     716 scope.go:117] "RemoveContainer" containerID="8554460fc29ffc07ad5f3396ac4fbb137674d7b675cb743c0b0a69912bcbb2e4"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: I1115 10:00:24.770836     716 scope.go:117] "RemoveContainer" containerID="03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	Nov 15 10:00:24 no-preload-559401 kubelet[716]: E1115 10:00:24.771029     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:27 no-preload-559401 kubelet[716]: I1115 10:00:27.755782     716 scope.go:117] "RemoveContainer" containerID="03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	Nov 15 10:00:27 no-preload-559401 kubelet[716]: E1115 10:00:27.756017     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:30 no-preload-559401 kubelet[716]: I1115 10:00:30.787631     716 scope.go:117] "RemoveContainer" containerID="4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538"
	Nov 15 10:00:40 no-preload-559401 kubelet[716]: I1115 10:00:40.639127     716 scope.go:117] "RemoveContainer" containerID="03e38d186cb33b81fe686e7929c9c082a06786416e10958c9b2fdeb001e6c672"
	Nov 15 10:00:40 no-preload-559401 kubelet[716]: E1115 10:00:40.639342     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn2wb_kubernetes-dashboard(2d9ce6e2-8199-4088-ad8b-2678ace0fb0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn2wb" podUID="2d9ce6e2-8199-4088-ad8b-2678ace0fb0a"
	Nov 15 10:00:46 no-preload-559401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:00:46 no-preload-559401 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:00:46 no-preload-559401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:00:46 no-preload-559401 systemd[1]: kubelet.service: Consumed 1.638s CPU time.
	
	
	==> kubernetes-dashboard [912d49aca42e20b1cb1e878d980787139033be9df50be4c3747a4673ed5b111b] <==
	2025/11/15 10:00:11 Using namespace: kubernetes-dashboard
	2025/11/15 10:00:11 Using in-cluster config to connect to apiserver
	2025/11/15 10:00:11 Using secret token for csrf signing
	2025/11/15 10:00:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:00:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:00:11 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:00:11 Generating JWE encryption key
	2025/11/15 10:00:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:00:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:00:11 Initializing JWE encryption key from synchronized object
	2025/11/15 10:00:11 Creating in-cluster Sidecar client
	2025/11/15 10:00:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:11 Serving insecurely on HTTP port: 9090
	2025/11/15 10:00:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:00:11 Starting overwatch
	
	
	==> storage-provisioner [331707db8368c603ce36b86038bdf108888e253d75468dcc135df0b0ff652f38] <==
	I1115 10:00:30.843085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:00:30.850757       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:00:30.850806       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:00:30.853149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:34.307784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:38.568155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:42.166762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:45.221613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:48.245830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:48.254489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:00:48.255419       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:00:48.255769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-559401_1db0fae6-4390-4472-9bf1-9c6b157168db!
	I1115 10:00:48.256118       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74ac0aca-4a5f-408d-9b7f-c3e70ed087ad", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-559401_1db0fae6-4390-4472-9bf1-9c6b157168db became leader
	W1115 10:00:48.265514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:48.270701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:00:48.356678       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-559401_1db0fae6-4390-4472-9bf1-9c6b157168db!
	W1115 10:00:50.273595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:50.278079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4577add3597913bbb519bd72d03420f5960399f70606bf8c8d70edd2e1e43538] <==
	I1115 10:00:00.094478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:00:30.100678       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-559401 -n no-preload-559401
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-559401 -n no-preload-559401: exit status 2 (351.033016ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-559401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.766481ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-430513 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-430513 describe deploy/metrics-server -n kube-system: exit status 1 (70.410658ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-430513 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-430513
helpers_test.go:243: (dbg) docker inspect embed-certs-430513:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307",
	        "Created": "2025-11-15T10:00:21.0128724Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 608652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:00:21.047148075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307-json.log",
	        "Name": "/embed-certs-430513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-430513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-430513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307",
	                "LowerDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-430513",
	                "Source": "/var/lib/docker/volumes/embed-certs-430513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-430513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-430513",
	                "name.minikube.sigs.k8s.io": "embed-certs-430513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3fc932f993747d54f1ba43c06ae90e295cb231ffe92d3a8c8a6751880cec1f3f",
	            "SandboxKey": "/var/run/docker/netns/3fc932f99374",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-430513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5a35f2144e5ffd9ac7511406e9418188a3c5784e35110b679aaeaa5b02f5ee9",
	                    "EndpointID": "b81fe08fde73e54479a39c35e5fe7aad470601f841277bccf170ad6988f22c19",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ba:ad:8e:80:4d:3c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-430513",
	                        "0d1528353148"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-430513 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-430513 logs -n 25: (1.096798102s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-335655 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p old-k8s-version-335655 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-559401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │                     │
	│ stop    │ -p no-preload-559401 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ addons  │ enable dashboard -p no-preload-559401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p kubernetes-upgrade-405833                                                                                                                                                                                                                  │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ image   │ old-k8s-version-335655 image list --format=json                                                                                                                                                                                               │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p old-k8s-version-335655 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p disable-driver-mounts-553319                                                                                                                                                                                                               │ disable-driver-mounts-553319 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ image   │ no-preload-559401 image list --format=json                                                                                                                                                                                                    │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p no-preload-559401 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p cert-expiration-341243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:01:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:01:02.634810  619148 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:01:02.634955  619148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:02.634960  619148 out.go:374] Setting ErrFile to fd 2...
	I1115 10:01:02.634966  619148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:02.635254  619148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:01:02.635838  619148 out.go:368] Setting JSON to false
	I1115 10:01:02.637474  619148 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6204,"bootTime":1763194659,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:01:02.637627  619148 start.go:143] virtualization: kvm guest
	I1115 10:01:02.640026  619148 out.go:179] * [cert-expiration-341243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:01:02.641507  619148 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:01:02.641514  619148 notify.go:221] Checking for updates...
	I1115 10:01:02.643877  619148 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:01:02.645120  619148 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:02.646333  619148 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:01:02.647595  619148 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:01:02.648778  619148 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:01:02.650459  619148 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:02.650968  619148 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:01:02.679869  619148 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:01:02.679958  619148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:02.738647  619148 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 10:01:02.728950174 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:02.738749  619148 docker.go:319] overlay module found
	I1115 10:01:02.740297  619148 out.go:179] * Using the docker driver based on existing profile
	I1115 10:01:02.741492  619148 start.go:309] selected driver: docker
	I1115 10:01:02.741513  619148 start.go:930] validating driver "docker" against &{Name:cert-expiration-341243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-341243 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:02.741603  619148 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:01:02.742273  619148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:02.805663  619148 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 10:01:02.795470122 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:02.805909  619148 cni.go:84] Creating CNI manager for ""
	I1115 10:01:02.805955  619148 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:02.805981  619148 start.go:353] cluster config:
	{Name:cert-expiration-341243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-341243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:02.807780  619148 out.go:179] * Starting "cert-expiration-341243" primary control-plane node in "cert-expiration-341243" cluster
	I1115 10:01:02.808981  619148 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:01:02.810359  619148 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:01:02.811598  619148 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:02.811631  619148 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:01:02.811649  619148 cache.go:65] Caching tarball of preloaded images
	I1115 10:01:02.811679  619148 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:01:02.811738  619148 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:01:02.811744  619148 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:01:02.811836  619148 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/cert-expiration-341243/config.json ...
	I1115 10:01:02.832730  619148 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:01:02.832740  619148 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:01:02.832756  619148 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:01:02.832783  619148 start.go:360] acquireMachinesLock for cert-expiration-341243: {Name:mkc714674589e6f5f7e7f4503f60c2cddf631c29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:01:02.832840  619148 start.go:364] duration metric: took 35.419µs to acquireMachinesLock for "cert-expiration-341243"
	I1115 10:01:02.832855  619148 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:01:02.832859  619148 fix.go:54] fixHost starting: 
	I1115 10:01:02.833049  619148 cli_runner.go:164] Run: docker container inspect cert-expiration-341243 --format={{.State.Status}}
	I1115 10:01:02.849536  619148 fix.go:112] recreateIfNeeded on cert-expiration-341243: state=Running err=<nil>
	W1115 10:01:02.849555  619148 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:01:00.180812  617563 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-783113:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.584857669s)
	I1115 10:01:00.180848  617563 kic.go:203] duration metric: took 4.585034838s to extract preloaded images to volume ...
	W1115 10:01:00.180957  617563 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 10:01:00.181000  617563 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 10:01:00.181043  617563 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:01:00.258113  617563 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-783113 --name newest-cni-783113 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-783113 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-783113 --network newest-cni-783113 --ip 192.168.103.2 --volume newest-cni-783113:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:01:00.632607  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Running}}
	I1115 10:01:00.656072  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:00.677343  617563 cli_runner.go:164] Run: docker exec newest-cni-783113 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:01:00.730109  617563 oci.go:144] the created container "newest-cni-783113" has a running status.
	I1115 10:01:00.730146  617563 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa...
	I1115 10:01:01.535498  617563 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:01:01.564269  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:01.586950  617563 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:01:01.586980  617563 kic_runner.go:114] Args: [docker exec --privileged newest-cni-783113 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:01:01.633522  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:01.656667  617563 machine.go:94] provisionDockerMachine start ...
	I1115 10:01:01.656828  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:01.679927  617563 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:01.680623  617563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1115 10:01:01.680657  617563 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:01:01.811787  617563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-783113
	
	I1115 10:01:01.811823  617563 ubuntu.go:182] provisioning hostname "newest-cni-783113"
	I1115 10:01:01.811897  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:01.831916  617563 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:01.832166  617563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1115 10:01:01.832182  617563 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-783113 && echo "newest-cni-783113" | sudo tee /etc/hostname
	I1115 10:01:01.977087  617563 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-783113
	
	I1115 10:01:01.977188  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:01.998870  617563 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:01.999172  617563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1115 10:01:01.999202  617563 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-783113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-783113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-783113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:01:02.146803  617563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:01:02.146847  617563 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:01:02.146900  617563 ubuntu.go:190] setting up certificates
	I1115 10:01:02.146923  617563 provision.go:84] configureAuth start
	I1115 10:01:02.147000  617563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:02.180058  617563 provision.go:143] copyHostCerts
	I1115 10:01:02.180145  617563 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:01:02.180160  617563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:01:02.180266  617563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:01:02.180385  617563 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:01:02.180411  617563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:01:02.180466  617563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:01:02.180557  617563 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:01:02.180567  617563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:01:02.180607  617563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:01:02.180682  617563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.newest-cni-783113 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-783113]
	I1115 10:01:02.293925  617563 provision.go:177] copyRemoteCerts
	I1115 10:01:02.294015  617563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:01:02.294070  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:02.314585  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:02.409835  617563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:01:02.429563  617563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:01:02.447600  617563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
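Aside: the server certificate generated at provision.go:117 above is signed for the SANs 127.0.0.1, 192.168.103.2, localhost, minikube and newest-cni-783113, and is copied to /etc/docker/server.pem on the node. An illustrative way to confirm those SANs (not a command run by this test; it assumes openssl is available inside the kicbase container) would be:

  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'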
	I1115 10:01:02.465900  617563 provision.go:87] duration metric: took 318.957757ms to configureAuth
	I1115 10:01:02.465930  617563 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:01:02.466108  617563 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:02.466235  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:02.484347  617563 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:02.484601  617563 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I1115 10:01:02.484657  617563 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:01:02.758565  617563 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:01:02.758597  617563 machine.go:97] duration metric: took 1.101903942s to provisionDockerMachine
	I1115 10:01:02.758610  617563 client.go:176] duration metric: took 7.704036769s to LocalClient.Create
	I1115 10:01:02.758628  617563 start.go:167] duration metric: took 7.704100629s to libmachine.API.Create "newest-cni-783113"
	I1115 10:01:02.758639  617563 start.go:293] postStartSetup for "newest-cni-783113" (driver="docker")
	I1115 10:01:02.758652  617563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:01:02.758756  617563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:01:02.758826  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:02.779627  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:02.880512  617563 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:01:02.884577  617563 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:01:02.884614  617563 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:01:02.884626  617563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:01:02.884685  617563 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:01:02.884762  617563 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:01:02.884849  617563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:01:02.893468  617563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:02.915240  617563 start.go:296] duration metric: took 156.583351ms for postStartSetup
	I1115 10:01:02.915696  617563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:02.936465  617563 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/config.json ...
	I1115 10:01:02.936739  617563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:01:02.936800  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:02.955148  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:03.047617  617563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:01:03.052447  617563 start.go:128] duration metric: took 8.000217455s to createHost
	I1115 10:01:03.052472  617563 start.go:83] releasing machines lock for "newest-cni-783113", held for 8.000381775s
	I1115 10:01:03.052548  617563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:03.072611  617563 ssh_runner.go:195] Run: cat /version.json
	I1115 10:01:03.072663  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:03.072706  617563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:01:03.072787  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:03.093059  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:03.093502  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:03.189647  617563 ssh_runner.go:195] Run: systemctl --version
	I1115 10:01:03.263117  617563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:01:03.306683  617563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:01:03.312341  617563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:01:03.312445  617563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:01:03.342208  617563 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:01:03.342236  617563 start.go:496] detecting cgroup driver to use...
	I1115 10:01:03.342271  617563 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:01:03.342433  617563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:01:03.363203  617563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:01:03.379372  617563 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:01:03.379478  617563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:01:03.401181  617563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:01:03.422758  617563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:01:03.536213  617563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:01:03.649441  617563 docker.go:234] disabling docker service ...
	I1115 10:01:03.649518  617563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:01:03.674479  617563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:01:03.692358  617563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:01:03.793990  617563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:01:03.894632  617563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:01:03.907910  617563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:01:03.922660  617563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:01:03.922729  617563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:03.933553  617563 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:01:03.933621  617563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:03.945662  617563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:03.955570  617563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:03.965523  617563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:01:03.974810  617563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:03.984204  617563 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:03.998668  617563 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:04.007795  617563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:01:04.016144  617563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:01:04.024259  617563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:04.108844  617563 ssh_runner.go:195] Run: sudo systemctl restart crio
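For reference, the sed edits above converge the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf toward roughly the following before the crio restart. This is a sketch reconstructed from the commands shown (assuming CRI-O's stock TOML section layout), not a dump of the actual file on the node:

  [crio.runtime]
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]

  [crio.image]
  pause_image = "registry.k8s.io/pause:3.10.1"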
	I1115 10:01:04.235707  617563 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:01:04.235785  617563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:01:04.240223  617563 start.go:564] Will wait 60s for crictl version
	I1115 10:01:04.240288  617563 ssh_runner.go:195] Run: which crictl
	I1115 10:01:04.244587  617563 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:01:04.281673  617563 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:01:04.281789  617563 ssh_runner.go:195] Run: crio --version
	I1115 10:01:04.310650  617563 ssh_runner.go:195] Run: crio --version
	I1115 10:01:04.347708  617563 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:01:04.349051  617563 cli_runner.go:164] Run: docker network inspect newest-cni-783113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:01:04.370357  617563 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:01:04.375481  617563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:01:04.391129  617563 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1115 10:01:04.392369  617563 kubeadm.go:884] updating cluster {Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:01:04.392641  617563 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:04.392722  617563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:04.430343  617563 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:04.430367  617563 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:01:04.430435  617563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:04.459720  617563 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:04.459751  617563 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:01:04.459762  617563 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:01:04.459887  617563 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-783113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
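The kubelet unit and flags above are written out as /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in (both scp'd a few lines below). To inspect the merged result on a node, one could, for example, run:

  systemctl cat kubelet

(an illustrative check, not part of the test run).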
	I1115 10:01:04.459977  617563 ssh_runner.go:195] Run: crio config
	I1115 10:01:04.529677  617563 cni.go:84] Creating CNI manager for ""
	I1115 10:01:04.529705  617563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:04.529729  617563 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:01:04.529759  617563 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-783113 NodeName:newest-cni-783113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:01:04.529936  617563 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-783113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:01:04.530025  617563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:01:04.538555  617563 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:01:04.538613  617563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:01:04.546742  617563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:01:04.560014  617563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:01:04.575753  617563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
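The 2214-byte payload staged here as /var/tmp/minikube/kubeadm.yaml.new is the kubeadm config printed above, and it is what kubeadm is ultimately driven with. A minimal manual equivalent, assuming the binary location verified just above and that the staged file is used as-is (minikube may rename it before running init), would be roughly:

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new

The [init]/[preflight]/[certs] messages from kubeadm.go:319 further down in this log (captured for a different profile) are the kind of output such an invocation prints.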
	I1115 10:01:04.593741  617563 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:01:04.598915  617563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:01:04.615013  617563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:04.710054  617563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:04.739588  617563 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113 for IP: 192.168.103.2
	I1115 10:01:04.739614  617563 certs.go:195] generating shared ca certs ...
	I1115 10:01:04.739641  617563 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:04.739808  617563 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:01:04.739865  617563 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:01:04.739879  617563 certs.go:257] generating profile certs ...
	I1115 10:01:04.739954  617563 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/client.key
	I1115 10:01:04.739987  617563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/client.crt with IP's: []
	I1115 10:01:05.226300  613222 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:01:05.226370  613222 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:01:05.226490  613222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:01:05.226579  613222 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:01:05.226625  613222 kubeadm.go:319] OS: Linux
	I1115 10:01:05.226684  613222 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:01:05.226772  613222 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:01:05.226833  613222 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:01:05.226895  613222 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:01:05.226960  613222 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:01:05.227040  613222 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:01:05.227148  613222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:01:05.227213  613222 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 10:01:05.227278  613222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:01:05.227493  613222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:01:05.227631  613222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:01:05.227738  613222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:01:05.229342  613222 out.go:252]   - Generating certificates and keys ...
	I1115 10:01:05.229444  613222 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:01:05.229549  613222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:01:05.229650  613222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:01:05.229746  613222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:01:05.229808  613222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:01:05.229853  613222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:01:05.229899  613222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:01:05.230046  613222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-679865 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:01:05.230093  613222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:01:05.230225  613222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-679865 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1115 10:01:05.230314  613222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:01:05.230404  613222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:01:05.230466  613222 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:01:05.230520  613222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:01:05.230562  613222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:01:05.230607  613222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:01:05.230658  613222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:01:05.230745  613222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:01:05.230817  613222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:01:05.230918  613222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:01:05.230993  613222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:01:05.233497  613222 out.go:252]   - Booting up control plane ...
	I1115 10:01:05.233637  613222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:01:05.233753  613222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:01:05.233846  613222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:01:05.233964  613222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:01:05.234076  613222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:01:05.234221  613222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:01:05.234337  613222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:01:05.234429  613222 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:01:05.234626  613222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:01:05.234788  613222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:01:05.234887  613222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001039228s
	I1115 10:01:05.235013  613222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:01:05.235135  613222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1115 10:01:05.235253  613222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:01:05.235368  613222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:01:05.235481  613222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.201867852s
	I1115 10:01:05.235599  613222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.34858589s
	I1115 10:01:05.235703  613222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002010763s
	I1115 10:01:05.235864  613222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:01:05.236045  613222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:01:05.236126  613222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:01:05.236424  613222 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-679865 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:01:05.236510  613222 kubeadm.go:319] [bootstrap-token] Using token: y6mhy1.bl75fhqvr2mehcbf
	I1115 10:01:05.238016  613222 out.go:252]   - Configuring RBAC rules ...
	I1115 10:01:05.238106  613222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:01:05.238199  613222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:01:05.238345  613222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:01:05.238525  613222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:01:05.238698  613222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:01:05.238797  613222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:01:05.238906  613222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:01:05.238969  613222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:01:05.239048  613222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:01:05.239059  613222 kubeadm.go:319] 
	I1115 10:01:05.239147  613222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:01:05.239156  613222 kubeadm.go:319] 
	I1115 10:01:05.239277  613222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:01:05.239293  613222 kubeadm.go:319] 
	I1115 10:01:05.239337  613222 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:01:05.239440  613222 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:01:05.239524  613222 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:01:05.239536  613222 kubeadm.go:319] 
	I1115 10:01:05.239611  613222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:01:05.239623  613222 kubeadm.go:319] 
	I1115 10:01:05.239684  613222 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:01:05.239696  613222 kubeadm.go:319] 
	I1115 10:01:05.239756  613222 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:01:05.239846  613222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:01:05.239938  613222 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:01:05.239949  613222 kubeadm.go:319] 
	I1115 10:01:05.240068  613222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:01:05.240183  613222 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:01:05.240194  613222 kubeadm.go:319] 
	I1115 10:01:05.240311  613222 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token y6mhy1.bl75fhqvr2mehcbf \
	I1115 10:01:05.240474  613222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 10:01:05.240506  613222 kubeadm.go:319] 	--control-plane 
	I1115 10:01:05.240515  613222 kubeadm.go:319] 
	I1115 10:01:05.240621  613222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:01:05.240629  613222 kubeadm.go:319] 
	I1115 10:01:05.240763  613222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token y6mhy1.bl75fhqvr2mehcbf \
	I1115 10:01:05.240946  613222 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 10:01:05.240962  613222 cni.go:84] Creating CNI manager for ""
	I1115 10:01:05.240971  613222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:05.242357  613222 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1115 10:01:02.851583  619148 out.go:252] * Updating the running docker "cert-expiration-341243" container ...
	I1115 10:01:02.851608  619148 machine.go:94] provisionDockerMachine start ...
	I1115 10:01:02.851676  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:02.869930  619148 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:02.870186  619148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1115 10:01:02.870192  619148 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:01:03.004699  619148 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-341243
	
	I1115 10:01:03.004726  619148 ubuntu.go:182] provisioning hostname "cert-expiration-341243"
	I1115 10:01:03.004787  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:03.023712  619148 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:03.023935  619148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1115 10:01:03.023943  619148 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-341243 && echo "cert-expiration-341243" | sudo tee /etc/hostname
	I1115 10:01:03.172654  619148 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-341243
	
	I1115 10:01:03.172730  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:03.194956  619148 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:03.195200  619148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1115 10:01:03.195213  619148 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-341243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-341243/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-341243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:01:03.335204  619148 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:01:03.335227  619148 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:01:03.335263  619148 ubuntu.go:190] setting up certificates
	I1115 10:01:03.335277  619148 provision.go:84] configureAuth start
	I1115 10:01:03.335338  619148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-341243
	I1115 10:01:03.359251  619148 provision.go:143] copyHostCerts
	I1115 10:01:03.359313  619148 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:01:03.359327  619148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:01:03.359380  619148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:01:03.359522  619148 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:01:03.359529  619148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:01:03.359567  619148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:01:03.359645  619148 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:01:03.359649  619148 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:01:03.359676  619148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:01:03.359732  619148 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-341243 san=[127.0.0.1 192.168.94.2 cert-expiration-341243 localhost minikube]
	I1115 10:01:03.503895  619148 provision.go:177] copyRemoteCerts
	I1115 10:01:03.503954  619148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:01:03.503988  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:03.528198  619148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/cert-expiration-341243/id_rsa Username:docker}
	I1115 10:01:03.634474  619148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:01:03.658414  619148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:01:03.682547  619148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:01:03.707153  619148 provision.go:87] duration metric: took 371.859966ms to configureAuth
	I1115 10:01:03.707176  619148 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:01:03.707403  619148 config.go:182] Loaded profile config "cert-expiration-341243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:03.707703  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:03.729783  619148 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:03.730147  619148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1115 10:01:03.730165  619148 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:01:04.044710  619148 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:01:04.044742  619148 machine.go:97] duration metric: took 1.193125558s to provisionDockerMachine
	I1115 10:01:04.044754  619148 start.go:293] postStartSetup for "cert-expiration-341243" (driver="docker")
	I1115 10:01:04.044768  619148 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:01:04.044841  619148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:01:04.044884  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:04.069542  619148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/cert-expiration-341243/id_rsa Username:docker}
	I1115 10:01:04.166047  619148 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:01:04.170979  619148 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:01:04.171010  619148 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:01:04.171019  619148 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:01:04.171068  619148 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:01:04.171162  619148 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:01:04.171261  619148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:01:04.181507  619148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:04.201355  619148 start.go:296] duration metric: took 156.573256ms for postStartSetup
	I1115 10:01:04.201534  619148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:01:04.201572  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:04.223006  619148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/cert-expiration-341243/id_rsa Username:docker}
	I1115 10:01:04.328447  619148 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:01:04.334109  619148 fix.go:56] duration metric: took 1.501241331s for fixHost
	I1115 10:01:04.334129  619148 start.go:83] releasing machines lock for "cert-expiration-341243", held for 1.50128068s
	I1115 10:01:04.334193  619148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-341243
	I1115 10:01:04.357045  619148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:01:04.357089  619148 ssh_runner.go:195] Run: cat /version.json
	I1115 10:01:04.357112  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:04.357136  619148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-341243
	I1115 10:01:04.378599  619148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/cert-expiration-341243/id_rsa Username:docker}
	I1115 10:01:04.379221  619148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/cert-expiration-341243/id_rsa Username:docker}
	I1115 10:01:04.544248  619148 ssh_runner.go:195] Run: systemctl --version
	I1115 10:01:04.551182  619148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:01:04.589923  619148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:01:04.602971  619148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:01:04.603034  619148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:01:04.615331  619148 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:01:04.615346  619148 start.go:496] detecting cgroup driver to use...
	I1115 10:01:04.615380  619148 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:01:04.615440  619148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:01:04.634598  619148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:01:04.651246  619148 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:01:04.651299  619148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:01:04.674301  619148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:01:04.689505  619148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:01:04.813850  619148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:01:04.937873  619148 docker.go:234] disabling docker service ...
	I1115 10:01:04.937942  619148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:01:04.953794  619148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:01:04.966376  619148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:01:05.083363  619148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:01:05.204672  619148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:01:05.218764  619148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:01:05.235868  619148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:01:05.235925  619148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:05.246316  619148 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:01:05.246367  619148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:05.256200  619148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:05.266088  619148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:05.275746  619148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:01:05.285115  619148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:05.294867  619148 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:05.304161  619148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:05.313256  619148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:01:05.321947  619148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:01:05.330000  619148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:05.469337  619148 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:01:05.693588  619148 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:01:05.693661  619148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:01:05.698866  619148 start.go:564] Will wait 60s for crictl version
	I1115 10:01:05.698922  619148 ssh_runner.go:195] Run: which crictl
	I1115 10:01:05.703374  619148 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:01:05.732586  619148 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:01:05.732671  619148 ssh_runner.go:195] Run: crio --version
	I1115 10:01:05.764278  619148 ssh_runner.go:195] Run: crio --version
	I1115 10:01:05.798196  619148 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Nov 15 10:00:53 embed-certs-430513 crio[787]: time="2025-11-15T10:00:53.268862553Z" level=info msg="Starting container: 3639f814c9430061f2643fe9b22a2b56e0347f09fc1dbeb05ed6f75d099d3e1f" id=3e883bdf-3061-4a0e-a6fe-e44c847fe564 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:00:53 embed-certs-430513 crio[787]: time="2025-11-15T10:00:53.271118129Z" level=info msg="Started container" PID=1837 containerID=3639f814c9430061f2643fe9b22a2b56e0347f09fc1dbeb05ed6f75d099d3e1f description=kube-system/coredns-66bc5c9577-6gvgh/coredns id=3e883bdf-3061-4a0e-a6fe-e44c847fe564 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1b947f482aae978611f599e8eb42dd8430ea1cf3c34164ec970f3f6a287e430
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.820228684Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fb7764a0-b534-4279-bcd9-a83e736b7091 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.820340033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.825632654Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2a326273bf5e34b45cb557cda82a4e3db4c73c280520d4d8400e0868842404b9 UID:e3cc26c8-a3a0-4086-9b89-4cc9281a80ab NetNS:/var/run/netns/159185e3-b714-42d2-b626-6a03f0741cbc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d8a518}] Aliases:map[]}"
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.825669419Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.838293125Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2a326273bf5e34b45cb557cda82a4e3db4c73c280520d4d8400e0868842404b9 UID:e3cc26c8-a3a0-4086-9b89-4cc9281a80ab NetNS:/var/run/netns/159185e3-b714-42d2-b626-6a03f0741cbc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d8a518}] Aliases:map[]}"
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.838515395Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.839709683Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.841069587Z" level=info msg="Ran pod sandbox 2a326273bf5e34b45cb557cda82a4e3db4c73c280520d4d8400e0868842404b9 with infra container: default/busybox/POD" id=fb7764a0-b534-4279-bcd9-a83e736b7091 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.842528734Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8467a3f8-b68e-436e-8540-f467dde5b56b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.842714872Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8467a3f8-b68e-436e-8540-f467dde5b56b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.842755402Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8467a3f8-b68e-436e-8540-f467dde5b56b name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.843557851Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=effae120-c4b8-4511-9f24-eba70dd70adb name=/runtime.v1.ImageService/PullImage
	Nov 15 10:00:56 embed-certs-430513 crio[787]: time="2025-11-15T10:00:56.846941539Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.194606899Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=effae120-c4b8-4511-9f24-eba70dd70adb name=/runtime.v1.ImageService/PullImage
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.195489659Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d014700-95d4-4d41-8791-7057f676dc3e name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.19693687Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7fb3f209-ba3e-4c47-afa7-3e682d40b0f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.202934072Z" level=info msg="Creating container: default/busybox/busybox" id=e05b2db6-86ca-44d0-b939-19ae2ad09848 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.203082485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.210022257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.213044503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.238838678Z" level=info msg="Created container 345cea164bc169998c82853647146df3f6f716b3c9f749d6cd748decc8d18e69: default/busybox/busybox" id=e05b2db6-86ca-44d0-b939-19ae2ad09848 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.239785532Z" level=info msg="Starting container: 345cea164bc169998c82853647146df3f6f716b3c9f749d6cd748decc8d18e69" id=3d74cdc8-49a9-4eb7-81fc-e364e2d56545 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:00 embed-certs-430513 crio[787]: time="2025-11-15T10:01:00.242497505Z" level=info msg="Started container" PID=1915 containerID=345cea164bc169998c82853647146df3f6f716b3c9f749d6cd748decc8d18e69 description=default/busybox/busybox id=3d74cdc8-49a9-4eb7-81fc-e364e2d56545 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a326273bf5e34b45cb557cda82a4e3db4c73c280520d4d8400e0868842404b9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	345cea164bc16       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   2a326273bf5e3       busybox                                      default
	3639f814c9430       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   e1b947f482aae       coredns-66bc5c9577-6gvgh                     kube-system
	ae733e929e923       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   b10256aeb277a       storage-provisioner                          kube-system
	241a97e6367c2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   faf3a3fc63c88       kindnet-h26k6                                kube-system
	2735f394b5b5e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   e5abbead6a40d       kube-proxy-kd7wd                             kube-system
	1ee791a607528       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   0b237a1765fef       kube-scheduler-embed-certs-430513            kube-system
	dbd878659f64f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   c239cbb208ade       kube-apiserver-embed-certs-430513            kube-system
	c0752cf5dbf07       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   e2fdf4fad26fc       kube-controller-manager-embed-certs-430513   kube-system
	163d97046eb3f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   0e804923b8dee       etcd-embed-certs-430513                      kube-system
	
	
	==> coredns [3639f814c9430061f2643fe9b22a2b56e0347f09fc1dbeb05ed6f75d099d3e1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41191 - 10474 "HINFO IN 469990034654734875.7765989446705505165. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.017848022s
	
	
	==> describe nodes <==
	Name:               embed-certs-430513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-430513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=embed-certs-430513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_00_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:00:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-430513
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:01:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:01:07 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:01:07 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:01:07 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:01:07 +0000   Sat, 15 Nov 2025 10:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-430513
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                5e71a89d-4318-4931-9ea5-663742f9579f
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6gvgh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-430513                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-h26k6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-430513             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-430513    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-kd7wd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-430513             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node embed-certs-430513 event: Registered Node embed-certs-430513 in Controller
	  Normal  NodeReady                15s                kubelet          Node embed-certs-430513 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [163d97046eb3fe1df03231107eb162e320f130b1389563ad8f8c3d2388e98382] <==
	{"level":"warn","ts":"2025-11-15T10:00:32.918142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.927511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.934245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.940728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.947475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.953844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.960714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.967145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.975099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.984559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.992014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:32.998732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.005226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.012615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.026949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.032357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.039536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.046471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.052959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.059897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.066875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.086453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.093277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.100517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:00:33.154892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:01:07 up  1:43,  0 user,  load average: 4.17, 2.77, 1.85
	Linux embed-certs-430513 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [241a97e6367c2a021af40875d9705fca538bc797e2c83913fb2ea2165f9a3b61] <==
	I1115 10:00:42.122064       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:00:42.122411       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:00:42.122608       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:00:42.122628       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:00:42.122652       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:00:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:00:42.367013       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:00:42.367071       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:00:42.367087       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:00:42.367230       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:00:42.767902       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:00:42.767926       1 metrics.go:72] Registering metrics
	I1115 10:00:42.767980       1 controller.go:711] "Syncing nftables rules"
	I1115 10:00:52.369985       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:00:52.370024       1 main.go:301] handling current node
	I1115 10:01:02.367537       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:01:02.367567       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dbd878659f64f2165fd1271853648ab7250d838204b41422ba315c3259cf70bf] <==
	E1115 10:00:33.730989       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1115 10:00:33.754591       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:00:33.758797       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:00:33.759021       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:00:33.764915       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:00:33.765141       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:00:33.934618       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:00:34.556788       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:00:34.560798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:00:34.560818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:00:35.079762       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:00:35.118699       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:00:35.160267       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:00:35.166331       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1115 10:00:35.167293       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:00:35.171634       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:00:35.583664       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:00:36.321527       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:00:36.331614       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:00:36.338819       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:00:41.238311       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:00:41.242100       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:00:41.538245       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:00:41.587484       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1115 10:01:05.646154       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:33442: use of closed network connection
	
	
	==> kube-controller-manager [c0752cf5dbf079a8b04e812afbc085ca81b13579714f95abe8a50fba999f1aba] <==
	I1115 10:00:40.561946       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:00:40.582494       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:00:40.583320       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:00:40.583361       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:00:40.583447       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:00:40.584219       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:00:40.584279       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:00:40.584301       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:00:40.584317       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:00:40.584350       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:00:40.584420       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:00:40.584505       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:00:40.584577       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:00:40.584615       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:00:40.584656       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:00:40.584747       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:00:40.584779       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:00:40.585230       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:00:40.587116       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:00:40.587296       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:00:40.592749       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:00:40.593905       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:00:40.599664       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:00:40.621261       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:00:55.537262       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2735f394b5b5eaca0dd5ba8073c06823770e3796041f2c5d94e8ff691534286b] <==
	I1115 10:00:41.973532       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:00:42.040521       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:00:42.141118       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:00:42.141170       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:00:42.141264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:00:42.161122       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:00:42.161264       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:00:42.167656       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:00:42.168056       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:00:42.168127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:00:42.169667       1 config.go:200] "Starting service config controller"
	I1115 10:00:42.170137       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:00:42.169774       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:00:42.170248       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:00:42.169804       1 config.go:309] "Starting node config controller"
	I1115 10:00:42.170268       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:00:42.170276       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:00:42.169790       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:00:42.170285       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:00:42.270694       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:00:42.270708       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:00:42.270743       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1ee791a607528735c1f471657e7e2e8fa29a0ecad8cf19fce2e727e5e8b19be6] <==
	E1115 10:00:33.615826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:00:33.615928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:00:33.615930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:00:33.615991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:00:33.616029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:00:33.616103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:00:33.617234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:00:33.617302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:00:33.617814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:00:33.617911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:00:33.618319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:00:33.618355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:00:33.618532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:00:33.618575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:00:33.618789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:00:34.460090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:00:34.533655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:00:34.541071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:00:34.577249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:00:34.734638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:00:34.741869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:00:34.751119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:00:34.823751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:00:34.829820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1115 10:00:37.011320       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:00:37 embed-certs-430513 kubelet[1318]: E1115 10:00:37.205921    1318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-embed-certs-430513\" already exists" pod="kube-system/kube-controller-manager-embed-certs-430513"
	Nov 15 10:00:37 embed-certs-430513 kubelet[1318]: E1115 10:00:37.206807    1318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-430513\" already exists" pod="kube-system/kube-scheduler-embed-certs-430513"
	Nov 15 10:00:37 embed-certs-430513 kubelet[1318]: I1115 10:00:37.217412    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-430513" podStartSLOduration=1.2173792159999999 podStartE2EDuration="1.217379216s" podCreationTimestamp="2025-11-15 10:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:00:37.205517424 +0000 UTC m=+1.121899033" watchObservedRunningTime="2025-11-15 10:00:37.217379216 +0000 UTC m=+1.133760824"
	Nov 15 10:00:37 embed-certs-430513 kubelet[1318]: I1115 10:00:37.230771    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-430513" podStartSLOduration=1.230729339 podStartE2EDuration="1.230729339s" podCreationTimestamp="2025-11-15 10:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:00:37.217767296 +0000 UTC m=+1.134148896" watchObservedRunningTime="2025-11-15 10:00:37.230729339 +0000 UTC m=+1.147110947"
	Nov 15 10:00:37 embed-certs-430513 kubelet[1318]: I1115 10:00:37.244788    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-430513" podStartSLOduration=1.244762736 podStartE2EDuration="1.244762736s" podCreationTimestamp="2025-11-15 10:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:00:37.231066035 +0000 UTC m=+1.147447643" watchObservedRunningTime="2025-11-15 10:00:37.244762736 +0000 UTC m=+1.161144343"
	Nov 15 10:00:40 embed-certs-430513 kubelet[1318]: I1115 10:00:40.604566    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 15 10:00:40 embed-certs-430513 kubelet[1318]: I1115 10:00:40.605262    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589442    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27ddf833-a045-40a5-9220-9cbae8dd4875-kube-proxy\") pod \"kube-proxy-kd7wd\" (UID: \"27ddf833-a045-40a5-9220-9cbae8dd4875\") " pod="kube-system/kube-proxy-kd7wd"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589492    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27ddf833-a045-40a5-9220-9cbae8dd4875-xtables-lock\") pod \"kube-proxy-kd7wd\" (UID: \"27ddf833-a045-40a5-9220-9cbae8dd4875\") " pod="kube-system/kube-proxy-kd7wd"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589520    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27ddf833-a045-40a5-9220-9cbae8dd4875-lib-modules\") pod \"kube-proxy-kd7wd\" (UID: \"27ddf833-a045-40a5-9220-9cbae8dd4875\") " pod="kube-system/kube-proxy-kd7wd"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589552    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ksh\" (UniqueName: \"kubernetes.io/projected/27ddf833-a045-40a5-9220-9cbae8dd4875-kube-api-access-54ksh\") pod \"kube-proxy-kd7wd\" (UID: \"27ddf833-a045-40a5-9220-9cbae8dd4875\") " pod="kube-system/kube-proxy-kd7wd"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589584    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/01c61aeb-fa93-4a50-b032-f52dbb9215a4-cni-cfg\") pod \"kindnet-h26k6\" (UID: \"01c61aeb-fa93-4a50-b032-f52dbb9215a4\") " pod="kube-system/kindnet-h26k6"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589602    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01c61aeb-fa93-4a50-b032-f52dbb9215a4-xtables-lock\") pod \"kindnet-h26k6\" (UID: \"01c61aeb-fa93-4a50-b032-f52dbb9215a4\") " pod="kube-system/kindnet-h26k6"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589624    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01c61aeb-fa93-4a50-b032-f52dbb9215a4-lib-modules\") pod \"kindnet-h26k6\" (UID: \"01c61aeb-fa93-4a50-b032-f52dbb9215a4\") " pod="kube-system/kindnet-h26k6"
	Nov 15 10:00:41 embed-certs-430513 kubelet[1318]: I1115 10:00:41.589644    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc9hv\" (UniqueName: \"kubernetes.io/projected/01c61aeb-fa93-4a50-b032-f52dbb9215a4-kube-api-access-gc9hv\") pod \"kindnet-h26k6\" (UID: \"01c61aeb-fa93-4a50-b032-f52dbb9215a4\") " pod="kube-system/kindnet-h26k6"
	Nov 15 10:00:42 embed-certs-430513 kubelet[1318]: I1115 10:00:42.218273    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h26k6" podStartSLOduration=1.218251101 podStartE2EDuration="1.218251101s" podCreationTimestamp="2025-11-15 10:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:00:42.218187279 +0000 UTC m=+6.134568897" watchObservedRunningTime="2025-11-15 10:00:42.218251101 +0000 UTC m=+6.134632710"
	Nov 15 10:00:42 embed-certs-430513 kubelet[1318]: I1115 10:00:42.238501    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kd7wd" podStartSLOduration=1.238476991 podStartE2EDuration="1.238476991s" podCreationTimestamp="2025-11-15 10:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:00:42.238308518 +0000 UTC m=+6.154690153" watchObservedRunningTime="2025-11-15 10:00:42.238476991 +0000 UTC m=+6.154858599"
	Nov 15 10:00:52 embed-certs-430513 kubelet[1318]: I1115 10:00:52.883017    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:00:53 embed-certs-430513 kubelet[1318]: I1115 10:00:53.073563    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/605418c0-0b25-478e-bc97-875523469f50-config-volume\") pod \"coredns-66bc5c9577-6gvgh\" (UID: \"605418c0-0b25-478e-bc97-875523469f50\") " pod="kube-system/coredns-66bc5c9577-6gvgh"
	Nov 15 10:00:53 embed-certs-430513 kubelet[1318]: I1115 10:00:53.073618    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a1e774e7-a59e-4d09-abca-2a71de44c919-tmp\") pod \"storage-provisioner\" (UID: \"a1e774e7-a59e-4d09-abca-2a71de44c919\") " pod="kube-system/storage-provisioner"
	Nov 15 10:00:53 embed-certs-430513 kubelet[1318]: I1115 10:00:53.073648    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k57jq\" (UniqueName: \"kubernetes.io/projected/a1e774e7-a59e-4d09-abca-2a71de44c919-kube-api-access-k57jq\") pod \"storage-provisioner\" (UID: \"a1e774e7-a59e-4d09-abca-2a71de44c919\") " pod="kube-system/storage-provisioner"
	Nov 15 10:00:53 embed-certs-430513 kubelet[1318]: I1115 10:00:53.073678    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx58p\" (UniqueName: \"kubernetes.io/projected/605418c0-0b25-478e-bc97-875523469f50-kube-api-access-wx58p\") pod \"coredns-66bc5c9577-6gvgh\" (UID: \"605418c0-0b25-478e-bc97-875523469f50\") " pod="kube-system/coredns-66bc5c9577-6gvgh"
	Nov 15 10:00:54 embed-certs-430513 kubelet[1318]: I1115 10:00:54.262599    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6gvgh" podStartSLOduration=13.26257932 podStartE2EDuration="13.26257932s" podCreationTimestamp="2025-11-15 10:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:00:54.261826901 +0000 UTC m=+18.178208524" watchObservedRunningTime="2025-11-15 10:00:54.26257932 +0000 UTC m=+18.178960927"
	Nov 15 10:00:56 embed-certs-430513 kubelet[1318]: I1115 10:00:56.511202    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.511176596 podStartE2EDuration="15.511176596s" podCreationTimestamp="2025-11-15 10:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:00:54.286605242 +0000 UTC m=+18.202986849" watchObservedRunningTime="2025-11-15 10:00:56.511176596 +0000 UTC m=+20.427558204"
	Nov 15 10:00:56 embed-certs-430513 kubelet[1318]: I1115 10:00:56.694281    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjpqn\" (UniqueName: \"kubernetes.io/projected/e3cc26c8-a3a0-4086-9b89-4cc9281a80ab-kube-api-access-bjpqn\") pod \"busybox\" (UID: \"e3cc26c8-a3a0-4086-9b89-4cc9281a80ab\") " pod="default/busybox"
	
	
	==> storage-provisioner [ae733e929e92353d08f499578c07b72b2e059b3ddccd5afad207940dc0305ef3] <==
	I1115 10:00:53.279601       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:00:53.292853       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:00:53.293066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:00:53.295764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:53.303267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:00:53.303537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:00:53.303776       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-430513_3c4fcac0-54a2-4523-aea8-1212e85eb26d!
	I1115 10:00:53.304110       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd082bdf-d760-43e6-b6b6-335a4fbc7891", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-430513_3c4fcac0-54a2-4523-aea8-1212e85eb26d became leader
	W1115 10:00:53.307722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:53.314041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:00:53.403885       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-430513_3c4fcac0-54a2-4523-aea8-1212e85eb26d!
	W1115 10:00:55.317176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:55.321872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:57.324964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:57.329466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:59.333077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:00:59.377180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:01.380922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:01.386141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:03.389450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:03.393421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:05.396145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:05.399867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:07.403224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:07.407462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-430513 -n embed-certs-430513
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-430513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.31s)
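For reference, the post-mortem above closes by asking the cluster for any pod that is not in the Running phase (`kubectl get po -A --field-selector=status.phase!=Running`). A minimal sketch of that same probe, assuming kubectl and the embed-certs-430513 context are reachable from the machine running it; the program is illustrative and not part of the harness:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the post-mortem helper runs: list pods across all
	// namespaces whose phase is not Running.
	out, err := exec.Command("kubectl",
		"--context", "embed-certs-430513",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	if names := strings.Fields(string(out)); len(names) > 0 {
		fmt.Println("non-Running pods:", names)
		return
	}
	fmt.Println("all pods are Running")
}
```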

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-783113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-783113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.9535ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-783113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
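The exit status 11 above comes from minikube's paused-state check rather than the addon itself: per the stderr, `addons enable` first runs `sudo runc list -f json` on the node, and on this CRI-O node that command fails because `/run/runc` does not exist, which minikube surfaces as MK_ADDON_ENABLE_PAUSED. A minimal sketch of re-running that probe by hand, assuming the node is the Docker container named in the inspect output below and reaching it with `docker exec` rather than minikube's own runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same command minikube's paused-state check runs on the node,
	// issued through `docker exec` against the kic container for this
	// profile (newest-cni-783113 in the inspect output below).
	cmd := exec.Command("docker", "exec", "newest-cni-783113",
		"sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// On this run runc exited non-zero ("open /run/runc: no such file
		// or directory"), which is what surfaces as MK_ADDON_ENABLE_PAUSED.
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc list output:\n%s", out)
}
```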
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-783113
helpers_test.go:243: (dbg) docker inspect newest-cni-783113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940",
	        "Created": "2025-11-15T10:01:00.281154454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 618374,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:01:00.333887793Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/hosts",
	        "LogPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940-json.log",
	        "Name": "/newest-cni-783113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-783113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-783113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940",
	                "LowerDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-783113",
	                "Source": "/var/lib/docker/volumes/newest-cni-783113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-783113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-783113",
	                "name.minikube.sigs.k8s.io": "newest-cni-783113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "57f89a4406285e488225e1a65efcd969242f96760115861ec62117d919374a0f",
	            "SandboxKey": "/var/run/docker/netns/57f89a440628",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-783113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5154d9a0ce32378165efc274699868177016a3c20c41bacb01c1c35fc0b5949c",
	                    "EndpointID": "3c833c2c4d3ca381d620684fc5e224e9685b23b9c2ac68039f7d1e965a656110",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3e:66:43:c8:92:64",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-783113",
	                        "0ac6b2197ead"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
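The inspect output records the host ports Docker assigned to the node (22, 2376, 5000, 8443 and 32443 mapped to 33459-33463 on 127.0.0.1). For reference, a minimal sketch, assuming the docker CLI is on PATH, of extracting just the 8443/tcp mapping using the same Go-template style the status helper below passes to `--format`:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the host port bound to the node's 8443/tcp
	// (33462 in the inspect output above).
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format,
		"newest-cni-783113").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("8443/tcp is published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}
```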
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-783113 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-559401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 09:59 UTC │
	│ start   │ -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 09:59 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p kubernetes-upgrade-405833                                                                                                                                                                                                                  │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ image   │ old-k8s-version-335655 image list --format=json                                                                                                                                                                                               │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p old-k8s-version-335655 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p disable-driver-mounts-553319                                                                                                                                                                                                               │ disable-driver-mounts-553319 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ image   │ no-preload-559401 image list --format=json                                                                                                                                                                                                    │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p no-preload-559401 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p cert-expiration-341243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p embed-certs-430513 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ delete  │ -p cert-expiration-341243                                                                                                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p auto-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-430513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-783113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:01:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:01:25.112628  625726 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:01:25.112973  625726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:25.112986  625726 out.go:374] Setting ErrFile to fd 2...
	I1115 10:01:25.112993  625726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:25.113335  625726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:01:25.113943  625726 out.go:368] Setting JSON to false
	I1115 10:01:25.115245  625726 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6226,"bootTime":1763194659,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:01:25.115342  625726 start.go:143] virtualization: kvm guest
	I1115 10:01:25.117498  625726 out.go:179] * [embed-certs-430513] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:01:25.119255  625726 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:01:25.119290  625726 notify.go:221] Checking for updates...
	I1115 10:01:25.122167  625726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:01:25.124001  625726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:25.125177  625726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:01:25.126201  625726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:01:25.127311  625726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:01:25.095647  617563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:25.183365  617563 kubeadm.go:1114] duration metric: took 4.189000358s to wait for elevateKubeSystemPrivileges
	I1115 10:01:25.183421  617563 kubeadm.go:403] duration metric: took 18.776487561s to StartCluster
	I1115 10:01:25.183484  617563 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:25.183665  617563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:25.185821  617563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:25.186102  617563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:01:25.186132  617563 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:01:25.186258  617563 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:25.186211  617563 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:01:25.186493  617563 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-783113"
	I1115 10:01:25.186503  617563 addons.go:70] Setting default-storageclass=true in profile "newest-cni-783113"
	I1115 10:01:25.186520  617563 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-783113"
	I1115 10:01:25.186524  617563 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-783113"
	I1115 10:01:25.186578  617563 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:25.186928  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:25.187416  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:25.194611  617563 out.go:179] * Verifying Kubernetes components...
	I1115 10:01:25.196156  617563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:25.218378  617563 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:01:25.128883  625726 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:25.129500  625726 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:01:25.157365  625726 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:01:25.157485  625726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:25.256739  625726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:01:25.227745081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:25.256924  625726 docker.go:319] overlay module found
	I1115 10:01:25.259326  625726 out.go:179] * Using the docker driver based on existing profile
	I1115 10:01:25.260508  625726 start.go:309] selected driver: docker
	I1115 10:01:25.260541  625726 start.go:930] validating driver "docker" against &{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:25.260671  625726 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:01:25.261609  625726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:25.351025  625726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:01:25.337708964 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:25.351357  625726 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:01:25.351469  625726 cni.go:84] Creating CNI manager for ""
	I1115 10:01:25.351573  625726 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:25.351654  625726 start.go:353] cluster config:
	{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:25.353940  625726 out.go:179] * Starting "embed-certs-430513" primary control-plane node in "embed-certs-430513" cluster
	I1115 10:01:25.355866  625726 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:01:25.357125  625726 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:01:25.358288  625726 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:25.358326  625726 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:01:25.358361  625726 cache.go:65] Caching tarball of preloaded images
	I1115 10:01:25.358367  625726 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:01:25.358518  625726 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:01:25.358536  625726 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:01:25.358681  625726 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json ...
	I1115 10:01:25.385513  625726 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:01:25.385538  625726 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:01:25.385557  625726 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:01:25.385594  625726 start.go:360] acquireMachinesLock for embed-certs-430513: {Name:mk23e9dcdc23745b328473e6d9e82c519bc86048 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:01:25.385659  625726 start.go:364] duration metric: took 40.262µs to acquireMachinesLock for "embed-certs-430513"
	I1115 10:01:25.385682  625726 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:01:25.385689  625726 fix.go:54] fixHost starting: 
	I1115 10:01:25.385973  625726 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:01:25.409937  625726 fix.go:112] recreateIfNeeded on embed-certs-430513: state=Stopped err=<nil>
	W1115 10:01:25.409975  625726 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:01:25.220283  617563 addons.go:239] Setting addon default-storageclass=true in "newest-cni-783113"
	I1115 10:01:25.220302  617563 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:25.220320  617563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:01:25.220332  617563 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:25.220379  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:25.220854  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:25.255599  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:25.258165  617563 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:25.258185  617563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:01:25.258255  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:25.290346  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:25.318989  617563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:01:25.373946  617563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:25.383819  617563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:25.414962  617563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:25.540082  617563 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1115 10:01:25.543281  617563 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:01:25.544097  617563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:01:25.766322  617563 api_server.go:72] duration metric: took 580.15746ms to wait for apiserver process to appear ...
	I1115 10:01:25.766352  617563 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:01:25.766374  617563 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:25.773085  617563 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:01:25.774174  617563 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:25.774204  617563 api_server.go:131] duration metric: took 7.844461ms to wait for apiserver health ...
	I1115 10:01:25.774215  617563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:25.778999  617563 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:25.779046  617563 system_pods.go:61] "coredns-66bc5c9577-87x7w" [3f2d84f5-7f97-4a19-b552-0575a9ceb536] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:25.779064  617563 system_pods.go:61] "etcd-newest-cni-783113" [2ea0aa42-7852-499c-8e8e-c5e1cfeb5707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:25.779076  617563 system_pods.go:61] "kindnet-zjdf2" [f7a3d406-4576-45ea-a09e-00df6579f9df] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:01:25.779098  617563 system_pods.go:61] "kube-apiserver-newest-cni-783113" [2313995d-c79b-4e18-8b97-3463f3d95a8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:25.779107  617563 system_pods.go:61] "kube-controller-manager-newest-cni-783113" [d3439ed1-3ef3-4865-9ff8-42c82ac3cfc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:25.779114  617563 system_pods.go:61] "kube-proxy-bqp7j" [19ca680a-9bd3-4943-842b-7ef042aa6e0e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:01:25.779122  617563 system_pods.go:61] "kube-scheduler-newest-cni-783113" [8feea409-ed92-4a4d-8df7-39898903b818] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:25.779128  617563 system_pods.go:61] "storage-provisioner" [830eb5ed-8939-4ca1-a08d-440456d95a53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:25.779137  617563 system_pods.go:74] duration metric: took 4.91397ms to wait for pod list to return data ...
	I1115 10:01:25.779149  617563 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:25.779876  617563 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:01:25.783886  617563 addons.go:515] duration metric: took 597.669269ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:01:25.787166  617563 default_sa.go:45] found service account: "default"
	I1115 10:01:25.787196  617563 default_sa.go:55] duration metric: took 8.038595ms for default service account to be created ...
	I1115 10:01:25.787211  617563 kubeadm.go:587] duration metric: took 601.051465ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:01:25.787254  617563 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:25.794786  617563 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:25.794835  617563 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:25.794854  617563 node_conditions.go:105] duration metric: took 7.593446ms to run NodePressure ...
	I1115 10:01:25.794870  617563 start.go:242] waiting for startup goroutines ...
	I1115 10:01:26.045286  617563 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-783113" context rescaled to 1 replicas
	I1115 10:01:26.045331  617563 start.go:247] waiting for cluster config update ...
	I1115 10:01:26.045345  617563 start.go:256] writing updated cluster config ...
	I1115 10:01:26.045751  617563 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:26.121001  617563 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:01:26.121967  617563 out.go:179] * Done! kubectl is now configured to use "newest-cni-783113" cluster and "default" namespace by default
	I1115 10:01:21.880666  622837 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:01:21.942165  622837 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:01:21.942278  622837 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:01:22.213796  622837 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:01:22.213967  622837 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:01:22.585970  622837 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:01:23.445050  622837 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:01:23.916320  622837 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:01:23.916474  622837 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:01:24.374123  622837 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:01:24.850628  622837 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:01:25.108781  622837 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:01:25.963201  622837 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:01:26.601494  622837 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:01:26.602035  622837 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:01:26.605917  622837 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:01:26.607519  622837 out.go:252]   - Booting up control plane ...
	I1115 10:01:26.607628  622837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:01:26.607713  622837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:01:26.608380  622837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:01:26.622423  622837 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:01:26.622603  622837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:01:26.629865  622837 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:01:26.630187  622837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:01:26.630250  622837 kubeadm.go:319] [kubelet-start] Starting the kubelet
	
	
	==> CRI-O <==
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.776518362Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.778025375Z" level=info msg="Ran pod sandbox c2cedce20e39623ba59c0f2a591ffad00467a12f5ba99b69a6d716432723669b with infra container: kube-system/kube-proxy-bqp7j/POD" id=a8085d5c-4484-455a-9047-3e766305e23a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.77897552Z" level=info msg="Running pod sandbox: kube-system/kindnet-zjdf2/POD" id=278bd50b-0151-4310-ab2b-14ffb4438269 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.779765322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.780073571Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5cdc10f9-e7fd-4146-b5e0-993e2ba61e17 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.787168853Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=314ccde9-f018-48a9-9293-73427936d257 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.79069318Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=278bd50b-0151-4310-ab2b-14ffb4438269 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.793958164Z" level=info msg="Creating container: kube-system/kube-proxy-bqp7j/kube-proxy" id=8ab2c397-65db-452b-8a52-66e9efea6b38 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.795818432Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.795973723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.800362281Z" level=info msg="Ran pod sandbox aaa1da8b6a862f37853d974efa3b8f22897146df17cbcbbd60fcd98bcf51f8de with infra container: kube-system/kindnet-zjdf2/POD" id=278bd50b-0151-4310-ab2b-14ffb4438269 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.8054243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.806175097Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.811199994Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=082e180a-e482-436e-8794-cc0dcb6f62f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.818042958Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ca408e96-5655-4b73-a237-54690fb8ec71 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.824301644Z" level=info msg="Creating container: kube-system/kindnet-zjdf2/kindnet-cni" id=4184ef05-01c8-4d22-a79f-97bc3ff0ca4e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.824462125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.829543351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.830127524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.851578675Z" level=info msg="Created container 568789f99c380387f6cd3400d6f8b56482e7b523df4e19e965d62dc945250654: kube-system/kindnet-zjdf2/kindnet-cni" id=4184ef05-01c8-4d22-a79f-97bc3ff0ca4e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.85209123Z" level=info msg="Created container f64515d47cc5c58b95599c709ae497520d79fb3662641c4ff305bf0358377c6f: kube-system/kube-proxy-bqp7j/kube-proxy" id=8ab2c397-65db-452b-8a52-66e9efea6b38 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.852303514Z" level=info msg="Starting container: 568789f99c380387f6cd3400d6f8b56482e7b523df4e19e965d62dc945250654" id=3cc22728-de58-4115-9b2d-78be74fc7de7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.852713401Z" level=info msg="Starting container: f64515d47cc5c58b95599c709ae497520d79fb3662641c4ff305bf0358377c6f" id=7a699ef7-df5e-401e-b973-d31fa3edf9d7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.8545463Z" level=info msg="Started container" PID=1614 containerID=568789f99c380387f6cd3400d6f8b56482e7b523df4e19e965d62dc945250654 description=kube-system/kindnet-zjdf2/kindnet-cni id=3cc22728-de58-4115-9b2d-78be74fc7de7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaa1da8b6a862f37853d974efa3b8f22897146df17cbcbbd60fcd98bcf51f8de
	Nov 15 10:01:25 newest-cni-783113 crio[769]: time="2025-11-15T10:01:25.855865046Z" level=info msg="Started container" PID=1611 containerID=f64515d47cc5c58b95599c709ae497520d79fb3662641c4ff305bf0358377c6f description=kube-system/kube-proxy-bqp7j/kube-proxy id=7a699ef7-df5e-401e-b973-d31fa3edf9d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2cedce20e39623ba59c0f2a591ffad00467a12f5ba99b69a6d716432723669b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	568789f99c380       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   aaa1da8b6a862       kindnet-zjdf2                               kube-system
	f64515d47cc5c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   c2cedce20e396       kube-proxy-bqp7j                            kube-system
	b1ae82fede670       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   f0ba9830d6222       etcd-newest-cni-783113                      kube-system
	c93be9d226d65       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   538c3cb2523d6       kube-scheduler-newest-cni-783113            kube-system
	8d14c378af833       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   8bc1c860c254b       kube-controller-manager-newest-cni-783113   kube-system
	49ba61bc9e3df       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   126659a5dca33       kube-apiserver-newest-cni-783113            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-783113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-783113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=newest-cni-783113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_01_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:01:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-783113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:01:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:01:20 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:01:20 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:01:20 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:01:20 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-783113
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                d180c89e-341a-4dbc-bc47-54c5b0042756
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-783113                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-zjdf2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-783113             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-783113    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-bqp7j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-783113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 14s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-783113 event: Registered Node newest-cni-783113 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [b1ae82fede670a458a71a0cc0aceedfa4e59d183451acfd2d8d142c7c7d09517] <==
	{"level":"warn","ts":"2025-11-15T10:01:16.586759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.600980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.609826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.617847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.625892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.634094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.642663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.651671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.660282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.669618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.682277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.695627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.707664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.719036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.728736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.738226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.749861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.759573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.768013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.777280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.794899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.800799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.808025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.818088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:16.883908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39718","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:01:27 up  1:43,  0 user,  load average: 6.34, 3.33, 2.05
	Linux newest-cni-783113 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [568789f99c380387f6cd3400d6f8b56482e7b523df4e19e965d62dc945250654] <==
	I1115 10:01:26.073327       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:01:26.085734       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:01:26.085919       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:01:26.085938       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:01:26.085966       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:01:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:01:26.302658       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:01:26.302684       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:01:26.302700       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:01:26.302849       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:01:26.671709       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:01:26.671751       1 metrics.go:72] Registering metrics
	I1115 10:01:26.671883       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [49ba61bc9e3df910e9aa09ca9aa0e81d47bda3f3192c73cd59d2beeb19cdb4ce] <==
	I1115 10:01:17.412274       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:01:17.412298       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1115 10:01:17.412845       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:01:17.416130       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:01:17.417487       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:17.421606       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:17.422754       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:01:17.454063       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:01:18.315581       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:01:18.320684       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:01:18.320705       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:01:18.794692       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:01:18.836962       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:01:18.920680       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:01:18.927330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1115 10:01:18.928754       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:01:18.932975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:01:19.329267       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:01:20.141153       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:01:20.149931       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:01:20.159369       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:01:25.252507       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:25.262027       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:25.387315       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:01:25.435642       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8d14c378af83398c05aa4e59422468213abedba3682a008350154cfb4c5f95a2] <==
	I1115 10:01:24.328779       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:01:24.329115       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:01:24.329230       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:01:24.329414       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:01:24.329721       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:01:24.329768       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1115 10:01:24.329843       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:01:24.329985       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:01:24.330056       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:01:24.330081       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:01:24.330162       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:01:24.330170       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:01:24.330197       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:01:24.330168       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:01:24.330201       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:01:24.331432       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:01:24.331558       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:01:24.331786       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:01:24.331910       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:01:24.331990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-783113"
	I1115 10:01:24.332042       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:01:24.338279       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:01:24.341555       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:24.346764       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:01:24.357269       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f64515d47cc5c58b95599c709ae497520d79fb3662641c4ff305bf0358377c6f] <==
	I1115 10:01:25.895258       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:01:25.965342       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:01:26.065498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:01:26.065553       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:01:26.065657       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:01:26.107055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:01:26.107116       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:01:26.113556       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:01:26.113925       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:01:26.114009       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:26.115454       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:01:26.115494       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:01:26.115549       1 config.go:200] "Starting service config controller"
	I1115 10:01:26.115560       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:01:26.115582       1 config.go:309] "Starting node config controller"
	I1115 10:01:26.115581       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:01:26.115587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:01:26.115593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:01:26.215721       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:01:26.215750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:01:26.215720       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:01:26.215869       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c93be9d226d659117c8d64963b7ccc492ea8c7f069ffbb6ef82c1d1ae550fb0b] <==
	E1115 10:01:17.372571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:01:17.372631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:01:17.372711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:01:17.372876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:01:17.372880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:01:17.372922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:01:17.373083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:01:17.373183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:01:17.373851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:01:18.178360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:01:18.178361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:01:18.264289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:01:18.305699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:01:18.365383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:01:18.383855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:01:18.390857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:01:18.418995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:01:18.420875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:01:18.466143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:01:18.500436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:01:18.535591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:01:18.552883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:01:18.555918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:01:18.598600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1115 10:01:21.067498       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:01:20 newest-cni-783113 kubelet[1331]: I1115 10:01:20.961015    1331 apiserver.go:52] "Watching apiserver"
	Nov 15 10:01:20 newest-cni-783113 kubelet[1331]: I1115 10:01:20.965276    1331 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.012713    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.012904    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.013006    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.013138    1331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: E1115 10:01:21.020278    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-783113\" already exists" pod="kube-system/kube-scheduler-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: E1115 10:01:21.021863    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-783113\" already exists" pod="kube-system/kube-controller-manager-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: E1115 10:01:21.022104    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-783113\" already exists" pod="kube-system/etcd-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: E1115 10:01:21.022241    1331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-783113\" already exists" pod="kube-system/kube-apiserver-newest-cni-783113"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.042948    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-783113" podStartSLOduration=1.042927926 podStartE2EDuration="1.042927926s" podCreationTimestamp="2025-11-15 10:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:21.042624259 +0000 UTC m=+1.148341465" watchObservedRunningTime="2025-11-15 10:01:21.042927926 +0000 UTC m=+1.148645127"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.083416    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-783113" podStartSLOduration=1.083368398 podStartE2EDuration="1.083368398s" podCreationTimestamp="2025-11-15 10:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:21.072798636 +0000 UTC m=+1.178515843" watchObservedRunningTime="2025-11-15 10:01:21.083368398 +0000 UTC m=+1.189085596"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.083628    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-783113" podStartSLOduration=2.083615115 podStartE2EDuration="2.083615115s" podCreationTimestamp="2025-11-15 10:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:21.083071936 +0000 UTC m=+1.188789142" watchObservedRunningTime="2025-11-15 10:01:21.083615115 +0000 UTC m=+1.189332301"
	Nov 15 10:01:21 newest-cni-783113 kubelet[1331]: I1115 10:01:21.102651    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-783113" podStartSLOduration=1.102630983 podStartE2EDuration="1.102630983s" podCreationTimestamp="2025-11-15 10:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:21.09261974 +0000 UTC m=+1.198336946" watchObservedRunningTime="2025-11-15 10:01:21.102630983 +0000 UTC m=+1.208348189"
	Nov 15 10:01:24 newest-cni-783113 kubelet[1331]: I1115 10:01:24.313635    1331 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:01:24 newest-cni-783113 kubelet[1331]: I1115 10:01:24.315222    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507030    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19ca680a-9bd3-4943-842b-7ef042aa6e0e-xtables-lock\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507092    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19ca680a-9bd3-4943-842b-7ef042aa6e0e-kube-proxy\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507134    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-cni-cfg\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507159    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9rj\" (UniqueName: \"kubernetes.io/projected/f7a3d406-4576-45ea-a09e-00df6579f9df-kube-api-access-xb9rj\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507203    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ca680a-9bd3-4943-842b-7ef042aa6e0e-lib-modules\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507231    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4m2n\" (UniqueName: \"kubernetes.io/projected/19ca680a-9bd3-4943-842b-7ef042aa6e0e-kube-api-access-b4m2n\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507255    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-xtables-lock\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:25 newest-cni-783113 kubelet[1331]: I1115 10:01:25.507277    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-lib-modules\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:26 newest-cni-783113 kubelet[1331]: I1115 10:01:26.037600    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zjdf2" podStartSLOduration=1.037574954 podStartE2EDuration="1.037574954s" podCreationTimestamp="2025-11-15 10:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:26.037566822 +0000 UTC m=+6.143284030" watchObservedRunningTime="2025-11-15 10:01:26.037574954 +0000 UTC m=+6.143292162"
	

                                                
                                                
-- /stdout --
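The kube-scheduler "Failed to watch ... forbidden" errors earlier in this log are usually transient on a control-plane restart: the freshly restarted apiserver denies the scheduler's list/watch requests until its RBAC caches have synced, and in this run the last such error is stamped 10:01:18, after which the informers recover. A hedged way to confirm the scheduler's permissions once the apiserver has settled (assuming the test kubeconfig user is allowed to impersonate):

	kubectl --context newest-cni-783113 auth can-i list pods --as=system:kube-scheduler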
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-783113 -n newest-cni-783113
E1115 10:01:27.820125  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-783113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-87x7w storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner: exit status 1 (70.010331ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-87x7w" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.10s)
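The describe failure above is a post-mortem artifact rather than part of the test failure itself: the pods reported as non-running at helpers_test.go:280 (coredns-66bc5c9577-87x7w and storage-provisioner) were replaced or removed before helpers_test.go:285 ran describe, so kubectl returns NotFound. A hedged sketch of a race-tolerant variant of that follow-up query, using the same context as above:

	kubectl --context newest-cni-783113 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  # describe may race with pod deletion; tolerate NotFound instead of aborting the post-mortem
	  kubectl --context newest-cni-783113 -n "$ns" describe pod "$name" || true
	done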

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-679865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-679865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (306.847261ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-679865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
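The exit status 11 is raised before the addon itself is touched: per the stderr above, minikube's pre-flight "check paused" step runs "sudo runc list -f json" inside the node, and with /run/runc absent runc exits 1, which surfaces as MK_ADDON_ENABLE_PAUSED. A hedged way to reproduce that check by hand against the same profile (the runc command is the one quoted in the error; the crictl call is added only for comparison and assumes the node container is still running):

	# The exact command the paused check failed on, executed over minikube ssh.
	out/minikube-linux-amd64 -p default-k8s-diff-port-679865 ssh -- sudo runc list -f json
	# crio's own view of container states, for comparison.
	out/minikube-linux-amd64 -p default-k8s-diff-port-679865 ssh -- sudo crictl ps -a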
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-679865 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-679865 describe deploy/metrics-server -n kube-system: exit status 1 (74.662127ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-679865 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
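start_stop_delete_test.go:219 expects the metrics-server deployment's container image to carry the fake.domain registry override passed via --images/--registries; because the enable aborted at the paused check, the deployment was never created and there is nothing to compare against. When the deployment does exist, a hedged one-liner for the same check:

	# Print the metrics-server image and confirm the registry override took effect.
	kubectl --context default-k8s-diff-port-679865 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' \
	  | grep -q 'fake.domain/registry.k8s.io/echoserver:1.4' && echo override applied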
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-679865
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-679865:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2",
	        "Created": "2025-11-15T10:00:47.592632721Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 614254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:00:47.637038153Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/hosts",
	        "LogPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2-json.log",
	        "Name": "/default-k8s-diff-port-679865",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-679865:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-679865",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2",
	                "LowerDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-679865",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-679865/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-679865",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-679865",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-679865",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3bd2d2f90a74fc9673c0a1e258dbd2928e6eedaeca827e375481370453b64faf",
	            "SandboxKey": "/var/run/docker/netns/3bd2d2f90a74",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-679865": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0a7ab291fd7d7a6f03caec52507c3e2e0702cb6e9e4295365d7aba23864f9771",
	                    "EndpointID": "3d635cb8669cf94380e961910473fab5ad0b632840a63b0ca891ad6244f5264c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7e:18:80:72:32:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-679865",
	                        "0b40f9321403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
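The inspect output shows how the profile's ports are published: the API server port 8444 (this profile was started with --apiserver-port=8444) is reachable on 127.0.0.1:33457, and SSH on 127.0.0.1:33454. The suite extracts single mappings from this JSON with a Go template (the same pattern appears in the cli_runner lines later in this log); a hedged one-liner doing the same by hand:

	# Host port that forwards to the apiserver port 8444/tcp inside the container.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-679865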
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-679865 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-679865 logs -n 25: (1.260360162s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p kubernetes-upgrade-405833                                                                                                                                                                                                                  │ kubernetes-upgrade-405833    │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ image   │ old-k8s-version-335655 image list --format=json                                                                                                                                                                                               │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p old-k8s-version-335655 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p disable-driver-mounts-553319                                                                                                                                                                                                               │ disable-driver-mounts-553319 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ image   │ no-preload-559401 image list --format=json                                                                                                                                                                                                    │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p no-preload-559401 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p cert-expiration-341243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p embed-certs-430513 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ delete  │ -p cert-expiration-341243                                                                                                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p auto-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-430513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-783113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p newest-cni-783113 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-679865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:01:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:01:25.112628  625726 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:01:25.112973  625726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:25.112986  625726 out.go:374] Setting ErrFile to fd 2...
	I1115 10:01:25.112993  625726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:25.113335  625726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:01:25.113943  625726 out.go:368] Setting JSON to false
	I1115 10:01:25.115245  625726 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6226,"bootTime":1763194659,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:01:25.115342  625726 start.go:143] virtualization: kvm guest
	I1115 10:01:25.117498  625726 out.go:179] * [embed-certs-430513] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:01:25.119255  625726 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:01:25.119290  625726 notify.go:221] Checking for updates...
	I1115 10:01:25.122167  625726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:01:25.124001  625726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:25.125177  625726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:01:25.126201  625726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:01:25.127311  625726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:01:25.095647  617563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:25.183365  617563 kubeadm.go:1114] duration metric: took 4.189000358s to wait for elevateKubeSystemPrivileges
	I1115 10:01:25.183421  617563 kubeadm.go:403] duration metric: took 18.776487561s to StartCluster
	I1115 10:01:25.183484  617563 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:25.183665  617563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:25.185821  617563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:25.186102  617563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:01:25.186132  617563 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:01:25.186258  617563 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:25.186211  617563 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:01:25.186493  617563 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-783113"
	I1115 10:01:25.186503  617563 addons.go:70] Setting default-storageclass=true in profile "newest-cni-783113"
	I1115 10:01:25.186520  617563 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-783113"
	I1115 10:01:25.186524  617563 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-783113"
	I1115 10:01:25.186578  617563 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:25.186928  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:25.187416  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:25.194611  617563 out.go:179] * Verifying Kubernetes components...
	I1115 10:01:25.196156  617563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:25.218378  617563 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:01:25.128883  625726 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:25.129500  625726 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:01:25.157365  625726 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:01:25.157485  625726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:25.256739  625726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:01:25.227745081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:25.256924  625726 docker.go:319] overlay module found
	I1115 10:01:25.259326  625726 out.go:179] * Using the docker driver based on existing profile
	I1115 10:01:25.260508  625726 start.go:309] selected driver: docker
	I1115 10:01:25.260541  625726 start.go:930] validating driver "docker" against &{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:25.260671  625726 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:01:25.261609  625726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:25.351025  625726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:01:25.337708964 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:25.351357  625726 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:01:25.351469  625726 cni.go:84] Creating CNI manager for ""
	I1115 10:01:25.351573  625726 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:25.351654  625726 start.go:353] cluster config:
	{Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:25.353940  625726 out.go:179] * Starting "embed-certs-430513" primary control-plane node in "embed-certs-430513" cluster
	I1115 10:01:25.355866  625726 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:01:25.357125  625726 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:01:25.358288  625726 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:25.358326  625726 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:01:25.358361  625726 cache.go:65] Caching tarball of preloaded images
	I1115 10:01:25.358367  625726 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:01:25.358518  625726 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:01:25.358536  625726 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:01:25.358681  625726 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json ...
	I1115 10:01:25.385513  625726 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:01:25.385538  625726 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:01:25.385557  625726 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:01:25.385594  625726 start.go:360] acquireMachinesLock for embed-certs-430513: {Name:mk23e9dcdc23745b328473e6d9e82c519bc86048 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:01:25.385659  625726 start.go:364] duration metric: took 40.262µs to acquireMachinesLock for "embed-certs-430513"
	I1115 10:01:25.385682  625726 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:01:25.385689  625726 fix.go:54] fixHost starting: 
	I1115 10:01:25.385973  625726 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:01:25.409937  625726 fix.go:112] recreateIfNeeded on embed-certs-430513: state=Stopped err=<nil>
	W1115 10:01:25.409975  625726 fix.go:138] unexpected machine state, will restart: <nil>
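
The fixHost step above reduces to a "docker container inspect" with a Go template plus a restart decision when the reported state is not "running". A minimal stand-alone sketch of that check, assuming only that the docker CLI is on PATH (containerState is an illustrative helper, not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	// Equivalent of: docker container inspect <name> --format={{.State.Status}}
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("embed-certs-430513") // name taken from the log above
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if state != "running" {
		// the recreateIfNeeded path seen above: restart the existing container
		fmt.Println("container is", state, "- would run: docker start embed-certs-430513")
	}
}
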
	I1115 10:01:25.220283  617563 addons.go:239] Setting addon default-storageclass=true in "newest-cni-783113"
	I1115 10:01:25.220302  617563 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:25.220320  617563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:01:25.220332  617563 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:25.220379  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:25.220854  617563 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:25.255599  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:25.258165  617563 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:25.258185  617563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:01:25.258255  617563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:25.290346  617563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:25.318989  617563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:01:25.373946  617563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:25.383819  617563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:25.414962  617563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:25.540082  617563 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
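
The sed pipeline run at 10:01:25.318989 splices a hosts block into the CoreDNS Corefile just before its forward directive, which is what produces the injected host record reported here. A rough equivalent in Go operating on an in-memory Corefile (injectHostRecord is an illustrative name, not a minikube function; the sample Corefile is trimmed):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the
// "forward . /etc/resolv.conf" line of a Corefile, mirroring the sed edit above.
func injectHostRecord(corefile, ip, host string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.103.1", "host.minikube.internal"))
}
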
	I1115 10:01:25.543281  617563 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:01:25.544097  617563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:01:25.766322  617563 api_server.go:72] duration metric: took 580.15746ms to wait for apiserver process to appear ...
	I1115 10:01:25.766352  617563 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:01:25.766374  617563 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:25.773085  617563 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:01:25.774174  617563 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:25.774204  617563 api_server.go:131] duration metric: took 7.844461ms to wait for apiserver health ...
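
The healthz wait above is a plain HTTPS poll against the apiserver until it answers 200 "ok". A minimal sketch of such a probe; the InsecureSkipVerify shortcut is purely illustrative (the apiserver's cert is signed by minikubeCA rather than a system CA, and minikube's real client is configured accordingly):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline passes, roughly what api_server.go is doing above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ready within %s", timeout)
}

func main() {
	_ = waitHealthz("https://192.168.103.2:8443/healthz", 30*time.Second)
}
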
	I1115 10:01:25.774215  617563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:25.778999  617563 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:25.779046  617563 system_pods.go:61] "coredns-66bc5c9577-87x7w" [3f2d84f5-7f97-4a19-b552-0575a9ceb536] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:25.779064  617563 system_pods.go:61] "etcd-newest-cni-783113" [2ea0aa42-7852-499c-8e8e-c5e1cfeb5707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:25.779076  617563 system_pods.go:61] "kindnet-zjdf2" [f7a3d406-4576-45ea-a09e-00df6579f9df] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1115 10:01:25.779098  617563 system_pods.go:61] "kube-apiserver-newest-cni-783113" [2313995d-c79b-4e18-8b97-3463f3d95a8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:25.779107  617563 system_pods.go:61] "kube-controller-manager-newest-cni-783113" [d3439ed1-3ef3-4865-9ff8-42c82ac3cfc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:25.779114  617563 system_pods.go:61] "kube-proxy-bqp7j" [19ca680a-9bd3-4943-842b-7ef042aa6e0e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1115 10:01:25.779122  617563 system_pods.go:61] "kube-scheduler-newest-cni-783113" [8feea409-ed92-4a4d-8df7-39898903b818] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:25.779128  617563 system_pods.go:61] "storage-provisioner" [830eb5ed-8939-4ca1-a08d-440456d95a53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:25.779137  617563 system_pods.go:74] duration metric: took 4.91397ms to wait for pod list to return data ...
	I1115 10:01:25.779149  617563 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:25.779876  617563 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:01:25.783886  617563 addons.go:515] duration metric: took 597.669269ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:01:25.787166  617563 default_sa.go:45] found service account: "default"
	I1115 10:01:25.787196  617563 default_sa.go:55] duration metric: took 8.038595ms for default service account to be created ...
	I1115 10:01:25.787211  617563 kubeadm.go:587] duration metric: took 601.051465ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:01:25.787254  617563 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:25.794786  617563 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:25.794835  617563 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:25.794854  617563 node_conditions.go:105] duration metric: took 7.593446ms to run NodePressure ...
	I1115 10:01:25.794870  617563 start.go:242] waiting for startup goroutines ...
	I1115 10:01:26.045286  617563 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-783113" context rescaled to 1 replicas
	I1115 10:01:26.045331  617563 start.go:247] waiting for cluster config update ...
	I1115 10:01:26.045345  617563 start.go:256] writing updated cluster config ...
	I1115 10:01:26.045751  617563 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:26.121001  617563 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:01:26.121967  617563 out.go:179] * Done! kubectl is now configured to use "newest-cni-783113" cluster and "default" namespace by default
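
The closing line compares the kubectl client version against the cluster version and reports their minor-version skew. A small sketch of that comparison on plain "major.minor.patch" strings (a real implementation would lean on a semver library; minorSkew here is illustrative):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two version strings, the number reported as "(minor skew: 0)" above.
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.34.2", "1.34.1")
	fmt.Println("minor skew:", skew) // 0
}
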
	I1115 10:01:21.880666  622837 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:01:21.942165  622837 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:01:21.942278  622837 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:01:22.213796  622837 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:01:22.213967  622837 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:01:22.585970  622837 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:01:23.445050  622837 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:01:23.916320  622837 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:01:23.916474  622837 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:01:24.374123  622837 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:01:24.850628  622837 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:01:25.108781  622837 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:01:25.963201  622837 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:01:26.601494  622837 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:01:26.602035  622837 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:01:26.605917  622837 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:01:26.607519  622837 out.go:252]   - Booting up control plane ...
	I1115 10:01:26.607628  622837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:01:26.607713  622837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:01:26.608380  622837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:01:26.622423  622837 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:01:26.622603  622837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:01:26.629865  622837 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:01:26.630187  622837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:01:26.630250  622837 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:01:25.411708  625726 out.go:252] * Restarting existing docker container for "embed-certs-430513" ...
	I1115 10:01:25.411791  625726 cli_runner.go:164] Run: docker start embed-certs-430513
	I1115 10:01:25.750865  625726 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:01:25.775576  625726 kic.go:430] container "embed-certs-430513" state is running.
	I1115 10:01:25.776288  625726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:01:25.809129  625726 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/config.json ...
	I1115 10:01:25.809411  625726 machine.go:94] provisionDockerMachine start ...
	I1115 10:01:25.809502  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:25.835705  625726 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:25.836180  625726 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1115 10:01:25.836248  625726 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:01:25.836919  625726 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48806->127.0.0.1:33469: read: connection reset by peer
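
The reset here is transient: the container was started moments earlier and its sshd is not yet accepting connections, so the provisioner evidently retries until the forwarded port answers (the hostname command succeeds about three seconds later). A bare-bones dial-with-retry sketch of that pattern (dialWithRetry is illustrative, not minikube's code; port number taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps re-dialing a forwarded SSH port until something answers
// or the attempt budget runs out.
func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("could not reach %s after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33469", 10, time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("connected to", conn.RemoteAddr())
	conn.Close()
}
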
	I1115 10:01:28.977772  625726 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-430513
	
	I1115 10:01:28.977819  625726 ubuntu.go:182] provisioning hostname "embed-certs-430513"
	I1115 10:01:28.977894  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:29.002376  625726 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:29.002890  625726 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1115 10:01:29.002909  625726 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-430513 && echo "embed-certs-430513" | sudo tee /etc/hostname
	I1115 10:01:29.152902  625726 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-430513
	
	I1115 10:01:29.153017  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:29.179426  625726 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:29.179742  625726 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1115 10:01:29.179767  625726 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-430513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-430513/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-430513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:01:29.309123  625726 main.go:143] libmachine: SSH cmd err, output: <nil>: 
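
The shell fragment above keeps /etc/hosts in sync with the new machine hostname: if no matching entry exists, it either rewrites the existing 127.0.1.1 line or appends one. The same logic on an in-memory copy of the file (setHostname127 is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setHostname127 mirrors the shell snippet above: leave the file alone if the
// hostname is already present, otherwise rewrite 127.0.1.1 or append it.
func setHostname127(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already present
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(setHostname127(hosts, "embed-certs-430513"))
}
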
	I1115 10:01:29.309159  625726 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:01:29.309180  625726 ubuntu.go:190] setting up certificates
	I1115 10:01:29.309192  625726 provision.go:84] configureAuth start
	I1115 10:01:29.309290  625726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:01:29.331498  625726 provision.go:143] copyHostCerts
	I1115 10:01:29.331579  625726 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:01:29.331601  625726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:01:29.331682  625726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:01:29.331775  625726 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:01:29.331784  625726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:01:29.331814  625726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:01:29.331864  625726 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:01:29.331872  625726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:01:29.331895  625726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:01:29.331947  625726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-430513 san=[127.0.0.1 192.168.76.2 embed-certs-430513 localhost minikube]
	I1115 10:01:29.601554  625726 provision.go:177] copyRemoteCerts
	I1115 10:01:29.601623  625726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:01:29.601672  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:29.622479  625726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:01:29.729633  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:01:29.751118  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:01:29.774329  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1115 10:01:29.794591  625726 provision.go:87] duration metric: took 485.385617ms to configureAuth
	I1115 10:01:29.794621  625726 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:01:29.794820  625726 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:29.794938  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:29.816671  625726 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:29.816928  625726 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I1115 10:01:29.816954  625726 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:01:26.740101  622837 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:01:26.740261  622837 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:01:27.241083  622837 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.995548ms
	I1115 10:01:27.245160  622837 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:01:27.245304  622837 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1115 10:01:27.245447  622837 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:01:27.245558  622837 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:01:29.138550  622837 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.893319735s
	I1115 10:01:29.670737  622837 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.425514338s
	I1115 10:01:31.247430  622837 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002245555s
	I1115 10:01:31.259287  622837 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:01:31.273014  622837 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:01:31.285165  622837 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:01:31.285492  622837 kubeadm.go:319] [mark-control-plane] Marking the node auto-034018 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:01:31.296009  622837 kubeadm.go:319] [bootstrap-token] Using token: 9beitq.uiqmo0stovjywkd7
	I1115 10:01:30.114139  625726 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:01:30.114168  625726 machine.go:97] duration metric: took 4.304743058s to provisionDockerMachine
	I1115 10:01:30.114184  625726 start.go:293] postStartSetup for "embed-certs-430513" (driver="docker")
	I1115 10:01:30.114198  625726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:01:30.114275  625726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:01:30.114334  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:30.134172  625726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:01:30.229029  625726 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:01:30.232622  625726 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:01:30.232648  625726 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:01:30.232659  625726 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:01:30.232704  625726 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:01:30.232784  625726 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:01:30.232878  625726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:01:30.240902  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:30.259437  625726 start.go:296] duration metric: took 145.235947ms for postStartSetup
	I1115 10:01:30.259525  625726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:01:30.259594  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:30.277656  625726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:01:30.370743  625726 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:01:30.375710  625726 fix.go:56] duration metric: took 4.990004396s for fixHost
	I1115 10:01:30.375741  625726 start.go:83] releasing machines lock for "embed-certs-430513", held for 4.99006864s
	I1115 10:01:30.375806  625726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-430513
	I1115 10:01:30.393823  625726 ssh_runner.go:195] Run: cat /version.json
	I1115 10:01:30.393857  625726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:01:30.393874  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:30.393930  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:30.412479  625726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:01:30.412589  625726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:01:30.569588  625726 ssh_runner.go:195] Run: systemctl --version
	I1115 10:01:30.577452  625726 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:01:30.619654  625726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:01:30.625544  625726 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:01:30.625614  625726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:01:30.635734  625726 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:01:30.635771  625726 start.go:496] detecting cgroup driver to use...
	I1115 10:01:30.635810  625726 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:01:30.635896  625726 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:01:30.653741  625726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:01:30.670998  625726 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:01:30.671059  625726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:01:30.689224  625726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:01:30.704760  625726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:01:30.803247  625726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:01:30.908651  625726 docker.go:234] disabling docker service ...
	I1115 10:01:30.908733  625726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:01:30.926137  625726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:01:30.943089  625726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:01:31.048221  625726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:01:31.144531  625726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:01:31.158033  625726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:01:31.173671  625726 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:01:31.173758  625726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:31.183435  625726 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:01:31.183532  625726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:31.193116  625726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:31.203345  625726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:31.213074  625726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:01:31.223248  625726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:31.234169  625726 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:31.243724  625726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:31.254725  625726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:01:31.264556  625726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:01:31.274549  625726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:31.365156  625726 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:01:31.477296  625726 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:01:31.477381  625726 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:01:31.481282  625726 start.go:564] Will wait 60s for crictl version
	I1115 10:01:31.481333  625726 ssh_runner.go:195] Run: which crictl
	I1115 10:01:31.484844  625726 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:01:31.509675  625726 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:01:31.509755  625726 ssh_runner.go:195] Run: crio --version
	I1115 10:01:31.537490  625726 ssh_runner.go:195] Run: crio --version
	I1115 10:01:31.572450  625726 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
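
The series of sed edits between 10:01:31.17 and 10:01:31.25 rewrites whole lines of /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the systemd cgroup manager before the daemon is restarted. The two central substitutions, applied here to an in-memory copy of the file (the sample contents are illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// same shape as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// same shape as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
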
	I1115 10:01:31.299184  622837 out.go:252]   - Configuring RBAC rules ...
	I1115 10:01:31.299355  622837 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:01:31.301614  622837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:01:31.315415  622837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:01:31.319414  622837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:01:31.326036  622837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:01:31.329139  622837 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:01:31.654213  622837 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:01:32.069254  622837 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:01:32.655427  622837 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:01:32.656628  622837 kubeadm.go:319] 
	I1115 10:01:32.656757  622837 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:01:32.656769  622837 kubeadm.go:319] 
	I1115 10:01:32.656878  622837 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:01:32.656900  622837 kubeadm.go:319] 
	I1115 10:01:32.656930  622837 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:01:32.657002  622837 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:01:32.657064  622837 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:01:32.657073  622837 kubeadm.go:319] 
	I1115 10:01:32.657136  622837 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:01:32.657144  622837 kubeadm.go:319] 
	I1115 10:01:32.657200  622837 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:01:32.657207  622837 kubeadm.go:319] 
	I1115 10:01:32.657267  622837 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:01:32.657359  622837 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:01:32.657470  622837 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:01:32.657480  622837 kubeadm.go:319] 
	I1115 10:01:32.657592  622837 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:01:32.657684  622837 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:01:32.657692  622837 kubeadm.go:319] 
	I1115 10:01:32.657806  622837 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9beitq.uiqmo0stovjywkd7 \
	I1115 10:01:32.657927  622837 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 10:01:32.657954  622837 kubeadm.go:319] 	--control-plane 
	I1115 10:01:32.657962  622837 kubeadm.go:319] 
	I1115 10:01:32.658059  622837 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:01:32.658069  622837 kubeadm.go:319] 
	I1115 10:01:32.658164  622837 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9beitq.uiqmo0stovjywkd7 \
	I1115 10:01:32.658285  622837 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 10:01:32.661509  622837 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:01:32.661661  622837 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:01:32.661692  622837 cni.go:84] Creating CNI manager for ""
	I1115 10:01:32.661723  622837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:32.664114  622837 out.go:179] * Configuring CNI (Container Networking Interface) ...
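
The --discovery-token-ca-cert-hash printed in the join command above is, as kubeadm documents it, a SHA-256 digest of the cluster CA's public key (its DER-encoded Subject Public Key Info). A sketch that recomputes such a hash from a CA certificate on disk (path borrowed from elsewhere in this log; caCertHash is an illustrative helper):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns sha256:<hex> over the certificate's Subject Public Key Info.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(hash)
}
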
	I1115 10:01:31.573921  625726 cli_runner.go:164] Run: docker network inspect embed-certs-430513 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:01:31.593731  625726 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:01:31.598929  625726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:01:31.611437  625726 kubeadm.go:884] updating cluster {Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:01:31.611623  625726 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:31.611665  625726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:31.647658  625726 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:31.647681  625726 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:01:31.647738  625726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:31.678045  625726 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:31.678067  625726 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:01:31.678074  625726 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:01:31.678172  625726 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-430513 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1115 10:01:31.678246  625726 ssh_runner.go:195] Run: crio config
	I1115 10:01:31.732633  625726 cni.go:84] Creating CNI manager for ""
	I1115 10:01:31.732659  625726 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:31.732680  625726 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:01:31.732716  625726 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-430513 NodeName:embed-certs-430513 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:01:31.732882  625726 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-430513"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:01:31.732959  625726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:01:31.741987  625726 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:01:31.742067  625726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:01:31.750605  625726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:01:31.764940  625726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:01:31.778133  625726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
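
The kubeadm.yaml.new written to the node here is the YAML dump shown above, rendered from the kubeadm options struct logged at kubeadm.go:190. A much-reduced sketch of that templating step, covering only the nodeRegistration fragment (the struct and template are illustrative, not minikube's own):

package main

import (
	"os"
	"text/template"
)

type nodeOpts struct {
	NodeName  string
	NodeIP    string
	CRISocket string
}

const fragment = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	tmpl := template.Must(template.New("node").Parse(fragment))
	_ = tmpl.Execute(os.Stdout, nodeOpts{
		NodeName:  "embed-certs-430513",
		NodeIP:    "192.168.76.2",
		CRISocket: "/var/run/crio/crio.sock",
	})
}
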
	I1115 10:01:31.790990  625726 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:01:31.794721  625726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:01:31.805240  625726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:31.897120  625726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:31.927795  625726 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513 for IP: 192.168.76.2
	I1115 10:01:31.927825  625726 certs.go:195] generating shared ca certs ...
	I1115 10:01:31.927849  625726 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:31.928047  625726 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:01:31.928125  625726 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:01:31.928140  625726 certs.go:257] generating profile certs ...
	I1115 10:01:31.928263  625726 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/client.key
	I1115 10:01:31.928331  625726 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key.866022bc
	I1115 10:01:31.928423  625726 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key
	I1115 10:01:31.928603  625726 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 10:01:31.928647  625726 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 10:01:31.928663  625726 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:01:31.928693  625726 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:01:31.928730  625726 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:01:31.928761  625726 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 10:01:31.928824  625726 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:31.930860  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:01:31.952571  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:01:31.971917  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:01:31.991262  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:01:32.019009  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1115 10:01:32.041544  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:01:32.061685  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:01:32.081485  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/embed-certs-430513/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:01:32.100359  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:01:32.119166  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:01:32.138006  625726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:01:32.157406  625726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:01:32.170216  625726 ssh_runner.go:195] Run: openssl version
	I1115 10:01:32.176511  625726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:01:32.184951  625726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:32.188659  625726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:32.188707  625726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:32.225103  625726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:01:32.233684  625726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:01:32.242345  625726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:01:32.246297  625726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:01:32.246349  625726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:01:32.284582  625726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:01:32.294615  625726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:01:32.304212  625726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:01:32.308176  625726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:01:32.308243  625726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:01:32.345309  625726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:01:32.354576  625726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:01:32.358899  625726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:01:32.394241  625726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:01:32.430430  625726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:01:32.475512  625726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:01:32.524965  625726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:01:32.575994  625726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
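
Each of these openssl invocations asks whether a certificate expires within the next 86400 seconds. The same check expressed in Go (expiresWithin is an illustrative helper; the path is one of the certs probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as `openssl x509 -checkend <seconds>`:
// does the certificate's NotAfter fall within the given window from now?
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
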
	I1115 10:01:32.640685  625726 kubeadm.go:401] StartCluster: {Name:embed-certs-430513 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-430513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:32.640810  625726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:01:32.640873  625726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:01:32.676675  625726 cri.go:89] found id: "7884e9381d1df9759c7a3893af1cf75c8acb92edff2489e9e07e1d1d4102b7df"
	I1115 10:01:32.676703  625726 cri.go:89] found id: "5fecf1854c34c29514b1ec6c6221755aeaa0b46dbd1e7d27edaf9fa5c71f7871"
	I1115 10:01:32.676708  625726 cri.go:89] found id: "edbf223b01e791d146a5f2ad465d24c0a6d60f196e80f447883f5851e9f2a5af"
	I1115 10:01:32.676713  625726 cri.go:89] found id: "aa074b22936792966ead83faadae096faa591efe77ef77f4c0e0ec3344f4e2e9"
	I1115 10:01:32.676717  625726 cri.go:89] found id: ""
	I1115 10:01:32.676763  625726 ssh_runner.go:195] Run: sudo runc list -f json
	W1115 10:01:32.690562  625726 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:32Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:01:32.690650  625726 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:01:32.699625  625726 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:01:32.699650  625726 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:01:32.699698  625726 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:01:32.708890  625726 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:01:32.709613  625726 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-430513" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:32.710142  625726 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-355485/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-430513" cluster setting kubeconfig missing "embed-certs-430513" context setting]
	I1115 10:01:32.711003  625726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
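	The repair consists of writing the missing cluster and context entries back into the kubeconfig under a file lock. Done by hand, the equivalent would look roughly like the following; the certificate-authority path is an assumption, while the server address comes from the node entry earlier in this log:

	    kubectl config set-cluster embed-certs-430513 \
	      --server=https://192.168.76.2:8443 \
	      --certificate-authority=/path/to/minikube/ca.crt    # assumed path
	    kubectl config set-context embed-certs-430513 \
	      --cluster=embed-certs-430513 --user=embed-certs-430513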
	I1115 10:01:32.713057  625726 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:01:32.721992  625726 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1115 10:01:32.722025  625726 kubeadm.go:602] duration metric: took 22.368885ms to restartPrimaryControlPlane
	I1115 10:01:32.722035  625726 kubeadm.go:403] duration metric: took 81.367356ms to StartCluster
	I1115 10:01:32.722053  625726 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:32.722119  625726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:32.724128  625726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:32.724437  625726 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:01:32.724511  625726 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:01:32.724626  625726 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-430513"
	I1115 10:01:32.724647  625726 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-430513"
	W1115 10:01:32.724655  625726 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:01:32.724685  625726 host.go:66] Checking if "embed-certs-430513" exists ...
	I1115 10:01:32.724688  625726 addons.go:70] Setting default-storageclass=true in profile "embed-certs-430513"
	I1115 10:01:32.724709  625726 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-430513"
	I1115 10:01:32.724711  625726 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:32.725081  625726 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:01:32.725201  625726 addons.go:70] Setting dashboard=true in profile "embed-certs-430513"
	I1115 10:01:32.725233  625726 addons.go:239] Setting addon dashboard=true in "embed-certs-430513"
	W1115 10:01:32.725243  625726 addons.go:248] addon dashboard should already be in state true
	I1115 10:01:32.725263  625726 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:01:32.725273  625726 host.go:66] Checking if "embed-certs-430513" exists ...
	I1115 10:01:32.725801  625726 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:01:32.726271  625726 out.go:179] * Verifying Kubernetes components...
	I1115 10:01:32.729890  625726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:32.754337  625726 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:01:32.755656  625726 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:32.755675  625726 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:01:32.755734  625726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:01:32.755923  625726 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:01:32.757235  625726 addons.go:239] Setting addon default-storageclass=true in "embed-certs-430513"
	W1115 10:01:32.757265  625726 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:01:32.757295  625726 host.go:66] Checking if "embed-certs-430513" exists ...
	I1115 10:01:32.757239  625726 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
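	Each enabled addon is rendered to a manifest under /etc/kubernetes/addons on the node and then applied against the cluster. A hedged host-side equivalent, using a local copy of the manifest (the filename is a placeholder):

	    # apply the same storage-provisioner manifest from the host against this profile's context
	    kubectl --context embed-certs-430513 apply -f storage-provisioner.yaml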
	
	
	==> CRI-O <==
	Nov 15 10:01:22 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:22.205723097Z" level=info msg="Starting container: be422975111dc2fbd36b9a21d5cf8930e554fd13a87ff7caf9137707d065253c" id=6239aff8-b057-4b0c-b96e-05b5fa89108c name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:22 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:22.207890768Z" level=info msg="Started container" PID=1844 containerID=be422975111dc2fbd36b9a21d5cf8930e554fd13a87ff7caf9137707d065253c description=kube-system/coredns-66bc5c9577-wknnh/coredns id=6239aff8-b057-4b0c-b96e-05b5fa89108c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a4bd2342315f85dca7156de07c074a284091a205fecdbfda469877c82c3ac08
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.771612988Z" level=info msg="Running pod sandbox: default/busybox/POD" id=77a653a0-4198-4e90-a40e-f62ca66695c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.771709055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.77810559Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2773256775c65ebe03accf4ba581c74c8295da5999dd9794c509b36a7e21db58 UID:cac86649-71f6-4c8c-b775-c310a8db63bc NetNS:/var/run/netns/17d21afc-f28e-4276-a4a7-cbc8c54cf162 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009805d8}] Aliases:map[]}"
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.778134062Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.803885643Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2773256775c65ebe03accf4ba581c74c8295da5999dd9794c509b36a7e21db58 UID:cac86649-71f6-4c8c-b775-c310a8db63bc NetNS:/var/run/netns/17d21afc-f28e-4276-a4a7-cbc8c54cf162 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009805d8}] Aliases:map[]}"
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.804112347Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.805191196Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.806295761Z" level=info msg="Ran pod sandbox 2773256775c65ebe03accf4ba581c74c8295da5999dd9794c509b36a7e21db58 with infra container: default/busybox/POD" id=77a653a0-4198-4e90-a40e-f62ca66695c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.807707452Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a9dfee7b-cf58-42d4-9b5a-3a676d82d2f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.807869613Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a9dfee7b-cf58-42d4-9b5a-3a676d82d2f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.807919292Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a9dfee7b-cf58-42d4-9b5a-3a676d82d2f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.808672724Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=16832eee-5661-40e9-b415-a6610cff12ce name=/runtime.v1.ImageService/PullImage
	Nov 15 10:01:24 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:24.811000413Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.061461508Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=16832eee-5661-40e9-b415-a6610cff12ce name=/runtime.v1.ImageService/PullImage
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.062248506Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=80814493-bf1e-4b50-b80b-a63836695853 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.06379209Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=803e484e-297e-47b9-971f-db106322540a name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.067253358Z" level=info msg="Creating container: default/busybox/busybox" id=6ccd3441-384f-4159-b1f1-c493e9d59791 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.067421032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.070964875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.071420691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.111932034Z" level=info msg="Created container 3bbafd6f26ec5ceff3cbeec3d849a6e96caabe6a8bfac782991423d78b8104f5: default/busybox/busybox" id=6ccd3441-384f-4159-b1f1-c493e9d59791 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.112614388Z" level=info msg="Starting container: 3bbafd6f26ec5ceff3cbeec3d849a6e96caabe6a8bfac782991423d78b8104f5" id=5b8fbe8b-75e7-4cbf-a0ed-d902536619be name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:27 default-k8s-diff-port-679865 crio[776]: time="2025-11-15T10:01:27.11465094Z" level=info msg="Started container" PID=1915 containerID=3bbafd6f26ec5ceff3cbeec3d849a6e96caabe6a8bfac782991423d78b8104f5 description=default/busybox/busybox id=5b8fbe8b-75e7-4cbf-a0ed-d902536619be name=/runtime.v1.RuntimeService/StartContainer sandboxID=2773256775c65ebe03accf4ba581c74c8295da5999dd9794c509b36a7e21db58
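	The sequence above is the standard CRI flow: run the pod sandbox, pull the image, create the container, start it. The same steps can be driven by hand through crictl; the JSON config files in this sketch are assumptions, not files present on the node:

	    sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    POD=$(sudo crictl runp pod-config.json)                       # hypothetical sandbox config
	    CTR=$(sudo crictl create "$POD" container-config.json pod-config.json)
	    sudo crictl start "$CTR"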
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	3bbafd6f26ec5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   2773256775c65       busybox                                                default
	be422975111dc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   2a4bd2342315f       coredns-66bc5c9577-wknnh                               kube-system
	91ea7002c24de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   d4bcee50ffee3       storage-provisioner                                    kube-system
	55d76a3a8edf0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   9fa7f3fbdff5b       kindnet-7j4zt                                          kube-system
	ad22928ecc399       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   5ddd9cd3edbfd       kube-proxy-qhrzp                                       kube-system
	bccb7c6189b40       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   c220258531098       kube-controller-manager-default-k8s-diff-port-679865   kube-system
	39f659cf4042b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   819fa0a480f7b       etcd-default-k8s-diff-port-679865                      kube-system
	b9026d149a449       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   ee6a380fcf8d7       kube-apiserver-default-k8s-diff-port-679865            kube-system
	39c38fd744041       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   bf0b8114fbe0e       kube-scheduler-default-k8s-diff-port-679865            kube-system
	
	
	==> coredns [be422975111dc2fbd36b9a21d5cf8930e554fd13a87ff7caf9137707d065253c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37440 - 5117 "HINFO IN 5341840308894985642.398223180004208182. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015311942s
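	The configuration SHA512 logged by the reload plugin is a hash of the Corefile that kubeadm stores in the coredns ConfigMap, and the single HINFO query to 127.0.0.1 is the loop-detection probe CoreDNS sends at startup. To see the Corefile that hash refers to:

	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'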
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-679865
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-679865
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=default-k8s-diff-port-679865
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_01_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-679865
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:01:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:01:35 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:01:35 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:01:35 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:01:35 +0000   Sat, 15 Nov 2025 10:01:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-679865
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                ba37645b-1855-4935-9368-1380eb8c0d66
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-wknnh                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-679865                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-7j4zt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-679865             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-679865    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-qhrzp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-679865             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node default-k8s-diff-port-679865 event: Registered Node default-k8s-diff-port-679865 in Controller
	  Normal  NodeReady                14s   kubelet          Node default-k8s-diff-port-679865 status is now: NodeReady
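	The node summary above is ordinary kubectl describe output and can be regenerated at any time; assuming the profile's kubeconfig context carries the same name as the profile:

	    kubectl --context default-k8s-diff-port-679865 describe node default-k8s-diff-port-679865
	    # or go through minikube's bundled kubectl
	    minikube -p default-k8s-diff-port-679865 kubectl -- describe node default-k8s-diff-port-679865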
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
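	The repeated "martian source" entries are packets arriving with a source address that is impossible on the receiving interface; the kernel logs them because log_martians is enabled for that interface. They are noise for this test rather than a failure. To check (or silence) the logging, a sketch:

	    sysctl net.ipv4.conf.all.log_martians
	    sudo sysctl -w net.ipv4.conf.all.log_martians=0    # silence the log spam, if desired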
	
	
	==> etcd [39f659cf4042b7f5df3f26c21ba69dcc3fc59d98d75dd1ff51e7ca49be71c674] <==
	{"level":"warn","ts":"2025-11-15T10:01:01.525764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.534547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.542718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.551026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.558713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.566416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.574227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.581678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.588879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.596627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.604384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.612648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.620857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.636666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.644884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.653530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:01.707468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:01:13.973265Z","caller":"traceutil/trace.go:172","msg":"trace[233196062] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"106.730739ms","start":"2025-11-15T10:01:13.866510Z","end":"2025-11-15T10:01:13.973241Z","steps":["trace[233196062] 'process raft request'  (duration: 63.655786ms)","trace[233196062] 'compare'  (duration: 42.953381ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-15T10:01:14.727081Z","caller":"traceutil/trace.go:172","msg":"trace[939252306] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"191.876093ms","start":"2025-11-15T10:01:14.535186Z","end":"2025-11-15T10:01:14.727062Z","steps":["trace[939252306] 'process raft request'  (duration: 191.739484ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:01:15.133557Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.340778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-15T10:01:15.133619Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.218185ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-679865\" limit:1 ","response":"range_response_count:1 size:5639"}
	{"level":"info","ts":"2025-11-15T10:01:15.133635Z","caller":"traceutil/trace.go:172","msg":"trace[438592710] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:386; }","duration":"122.44567ms","start":"2025-11-15T10:01:15.011174Z","end":"2025-11-15T10:01:15.133619Z","steps":["trace[438592710] 'range keys from in-memory index tree'  (duration: 122.257646ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:01:15.133665Z","caller":"traceutil/trace.go:172","msg":"trace[162800498] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-679865; range_end:; response_count:1; response_revision:386; }","duration":"148.275136ms","start":"2025-11-15T10:01:14.985379Z","end":"2025-11-15T10:01:15.133654Z","steps":["trace[162800498] 'range keys from in-memory index tree'  (duration: 148.091435ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-15T10:01:15.133619Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.264921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-679865\" limit:1 ","response":"range_response_count:1 size:5038"}
	{"level":"info","ts":"2025-11-15T10:01:15.133763Z","caller":"traceutil/trace.go:172","msg":"trace[1645518760] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-679865; range_end:; response_count:1; response_revision:386; }","duration":"148.41963ms","start":"2025-11-15T10:01:14.985332Z","end":"2025-11-15T10:01:15.133752Z","steps":["trace[1645518760] 'range keys from in-memory index tree'  (duration: 148.123364ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:01:35 up  1:43,  0 user,  load average: 5.99, 3.31, 2.05
	Linux default-k8s-diff-port-679865 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [55d76a3a8edf077cce4e98d3b65b9cff01deb745628e9162eadf5d88bc09fb8c] <==
	I1115 10:01:10.988897       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:01:10.989146       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:01:10.989273       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:01:10.989288       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:01:10.989310       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:01:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:01:11.780360       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:01:11.780512       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:01:11.780530       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:01:11.780904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:01:12.180765       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:01:12.180804       1 metrics.go:72] Registering metrics
	I1115 10:01:12.180863       1 controller.go:711] "Syncing nftables rules"
	I1115 10:01:21.282075       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:01:21.282137       1 main.go:301] handling current node
	I1115 10:01:31.283471       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:01:31.283503       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9026d149a44922cbc5c4966f3ea15d79f44db08c2e43216812fe9eae2d03943] <==
	I1115 10:01:02.203569       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:01:02.204893       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:01:02.208450       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:02.208516       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1115 10:01:02.213522       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:02.214313       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:01:02.229627       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:01:03.108862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1115 10:01:03.113499       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1115 10:01:03.113525       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:01:03.622002       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:01:03.669313       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:01:03.810713       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1115 10:01:03.827330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1115 10:01:03.829277       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:01:03.836661       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:01:04.120178       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:01:04.626834       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:01:04.635499       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1115 10:01:04.645818       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 10:01:09.822061       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1115 10:01:10.176817       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:01:10.227539       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:10.234094       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1115 10:01:33.577292       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:48738: use of closed network connection
	
	
	==> kube-controller-manager [bccb7c6189b4048d88716b6a5bfbedeb8d1fe284c7fa1bcac378f9f71fed3c56] <==
	I1115 10:01:09.079507       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1115 10:01:09.081689       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:01:09.083673       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-679865" podCIDRs=["10.244.0.0/24"]
	I1115 10:01:09.088778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:01:09.118738       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:01:09.118882       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:01:09.119000       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:01:09.120140       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:01:09.120197       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:01:09.120217       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:01:09.120263       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1115 10:01:09.120306       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:01:09.120346       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1115 10:01:09.121535       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:01:09.121565       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:01:09.121598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:01:09.122771       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:01:09.122808       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1115 10:01:09.130707       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:09.132851       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:01:09.140094       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:01:09.145382       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:01:09.145761       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:09.155047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:01:24.072472       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad22928ecc399a9ff6a91d6e41e990d0e3aeade56e4205ff9d2619413294c635] <==
	I1115 10:01:10.850055       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:01:10.917337       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:01:11.017986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:01:11.018023       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:01:11.018112       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:01:11.037448       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:01:11.037522       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:01:11.042992       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:01:11.043438       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:01:11.043475       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:11.046949       1 config.go:200] "Starting service config controller"
	I1115 10:01:11.046972       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:01:11.046986       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:01:11.047002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:01:11.047027       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:01:11.047038       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:01:11.047066       1 config.go:309] "Starting node config controller"
	I1115 10:01:11.047083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:01:11.047092       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:01:11.147989       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:01:11.148005       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:01:11.148033       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
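	The configuration warning about nodePortAddresses refers to a field of the KubeProxyConfiguration that kubeadm stores in the kube-proxy ConfigMap; as logged, leaving it unset makes NodePort services accept connections on every local IP. A hedged way to inspect the current setting:

	    kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' \
	      | grep -A1 nodePortAddresses
	    # the warning suggests the value "primary" so NodePorts bind only to the node's primary IPs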
	
	
	==> kube-scheduler [39c38fd744041fb80d302210c95779b5de1058f2b9581bf229625ee8bb4f5cbd] <==
	E1115 10:01:02.156122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:01:02.158810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:01:02.158999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:01:02.158579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:01:02.159483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:01:02.160105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:01:02.159796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:01:02.159557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:01:02.160332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:01:02.160491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:01:02.162640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:01:02.959856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:01:03.168199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:01:03.186592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:01:03.211977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 10:01:03.221104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:01:03.247375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:01:03.332386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:01:03.355702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:01:03.373878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:01:03.375818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:01:03.401099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:01:03.424371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:01:03.651505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1115 10:01:06.052306       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
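	The burst of "Failed to watch ... is forbidden" errors covers roughly the first second after the apiserver came up, before its default RBAC roles and bindings were reconciled; the scheduler's informer caches sync at 10:01:06 and the errors stop. The default binding the scheduler relies on can be confirmed afterwards:

	    kubectl get clusterrolebinding system:kube-scheduler -o wide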
	
	
	==> kubelet <==
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.133007    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.893907    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt6zx\" (UniqueName: \"kubernetes.io/projected/bfa5f457-b12e-4e22-adc4-1f0194ab0339-kube-api-access-kt6zx\") pod \"kindnet-7j4zt\" (UID: \"bfa5f457-b12e-4e22-adc4-1f0194ab0339\") " pod="kube-system/kindnet-7j4zt"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.893961    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-xtables-lock\") pod \"kube-proxy-qhrzp\" (UID: \"ac94ddc3-4b28-4ca8-a5d5-877120496ee0\") " pod="kube-system/kube-proxy-qhrzp"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.893986    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfxk\" (UniqueName: \"kubernetes.io/projected/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-kube-api-access-ddfxk\") pod \"kube-proxy-qhrzp\" (UID: \"ac94ddc3-4b28-4ca8-a5d5-877120496ee0\") " pod="kube-system/kube-proxy-qhrzp"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.894016    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bfa5f457-b12e-4e22-adc4-1f0194ab0339-cni-cfg\") pod \"kindnet-7j4zt\" (UID: \"bfa5f457-b12e-4e22-adc4-1f0194ab0339\") " pod="kube-system/kindnet-7j4zt"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.894151    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-kube-proxy\") pod \"kube-proxy-qhrzp\" (UID: \"ac94ddc3-4b28-4ca8-a5d5-877120496ee0\") " pod="kube-system/kube-proxy-qhrzp"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.894193    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-lib-modules\") pod \"kube-proxy-qhrzp\" (UID: \"ac94ddc3-4b28-4ca8-a5d5-877120496ee0\") " pod="kube-system/kube-proxy-qhrzp"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.894218    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfa5f457-b12e-4e22-adc4-1f0194ab0339-xtables-lock\") pod \"kindnet-7j4zt\" (UID: \"bfa5f457-b12e-4e22-adc4-1f0194ab0339\") " pod="kube-system/kindnet-7j4zt"
	Nov 15 10:01:09 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:09.894239    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfa5f457-b12e-4e22-adc4-1f0194ab0339-lib-modules\") pod \"kindnet-7j4zt\" (UID: \"bfa5f457-b12e-4e22-adc4-1f0194ab0339\") " pod="kube-system/kindnet-7j4zt"
	Nov 15 10:01:10 default-k8s-diff-port-679865 kubelet[1315]: E1115 10:01:10.002157    1315 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 10:01:10 default-k8s-diff-port-679865 kubelet[1315]: E1115 10:01:10.002198    1315 projected.go:196] Error preparing data for projected volume kube-api-access-kt6zx for pod kube-system/kindnet-7j4zt: configmap "kube-root-ca.crt" not found
	Nov 15 10:01:10 default-k8s-diff-port-679865 kubelet[1315]: E1115 10:01:10.002270    1315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bfa5f457-b12e-4e22-adc4-1f0194ab0339-kube-api-access-kt6zx podName:bfa5f457-b12e-4e22-adc4-1f0194ab0339 nodeName:}" failed. No retries permitted until 2025-11-15 10:01:10.502244435 +0000 UTC m=+6.115244676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kt6zx" (UniqueName: "kubernetes.io/projected/bfa5f457-b12e-4e22-adc4-1f0194ab0339-kube-api-access-kt6zx") pod "kindnet-7j4zt" (UID: "bfa5f457-b12e-4e22-adc4-1f0194ab0339") : configmap "kube-root-ca.crt" not found
	Nov 15 10:01:10 default-k8s-diff-port-679865 kubelet[1315]: E1115 10:01:10.002160    1315 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 15 10:01:10 default-k8s-diff-port-679865 kubelet[1315]: E1115 10:01:10.002303    1315 projected.go:196] Error preparing data for projected volume kube-api-access-ddfxk for pod kube-system/kube-proxy-qhrzp: configmap "kube-root-ca.crt" not found
	Nov 15 10:01:10 default-k8s-diff-port-679865 kubelet[1315]: E1115 10:01:10.002370    1315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-kube-api-access-ddfxk podName:ac94ddc3-4b28-4ca8-a5d5-877120496ee0 nodeName:}" failed. No retries permitted until 2025-11-15 10:01:10.502348945 +0000 UTC m=+6.115349211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ddfxk" (UniqueName: "kubernetes.io/projected/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-kube-api-access-ddfxk") pod "kube-proxy-qhrzp" (UID: "ac94ddc3-4b28-4ca8-a5d5-877120496ee0") : configmap "kube-root-ca.crt" not found
	Nov 15 10:01:11 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:11.539167    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7j4zt" podStartSLOduration=2.539145205 podStartE2EDuration="2.539145205s" podCreationTimestamp="2025-11-15 10:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:11.538919061 +0000 UTC m=+7.151919345" watchObservedRunningTime="2025-11-15 10:01:11.539145205 +0000 UTC m=+7.152145458"
	Nov 15 10:01:11 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:11.549551    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qhrzp" podStartSLOduration=2.549529708 podStartE2EDuration="2.549529708s" podCreationTimestamp="2025-11-15 10:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:11.549333025 +0000 UTC m=+7.162333288" watchObservedRunningTime="2025-11-15 10:01:11.549529708 +0000 UTC m=+7.162529970"
	Nov 15 10:01:21 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:21.825387    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 15 10:01:21 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:21.885640    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7524\" (UniqueName: \"kubernetes.io/projected/991ed950-4b2d-40bb-ba38-aeda29531470-kube-api-access-t7524\") pod \"storage-provisioner\" (UID: \"991ed950-4b2d-40bb-ba38-aeda29531470\") " pod="kube-system/storage-provisioner"
	Nov 15 10:01:21 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:21.885695    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f27efb7b-b010-4799-b514-de73041c10ed-config-volume\") pod \"coredns-66bc5c9577-wknnh\" (UID: \"f27efb7b-b010-4799-b514-de73041c10ed\") " pod="kube-system/coredns-66bc5c9577-wknnh"
	Nov 15 10:01:21 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:21.885721    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pgw\" (UniqueName: \"kubernetes.io/projected/f27efb7b-b010-4799-b514-de73041c10ed-kube-api-access-v2pgw\") pod \"coredns-66bc5c9577-wknnh\" (UID: \"f27efb7b-b010-4799-b514-de73041c10ed\") " pod="kube-system/coredns-66bc5c9577-wknnh"
	Nov 15 10:01:21 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:21.885752    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/991ed950-4b2d-40bb-ba38-aeda29531470-tmp\") pod \"storage-provisioner\" (UID: \"991ed950-4b2d-40bb-ba38-aeda29531470\") " pod="kube-system/storage-provisioner"
	Nov 15 10:01:22 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:22.556470    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.556446915 podStartE2EDuration="12.556446915s" podCreationTimestamp="2025-11-15 10:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:22.556116561 +0000 UTC m=+18.169116860" watchObservedRunningTime="2025-11-15 10:01:22.556446915 +0000 UTC m=+18.169447178"
	Nov 15 10:01:22 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:22.566010    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wknnh" podStartSLOduration=12.565989473 podStartE2EDuration="12.565989473s" podCreationTimestamp="2025-11-15 10:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-15 10:01:22.565588009 +0000 UTC m=+18.178588253" watchObservedRunningTime="2025-11-15 10:01:22.565989473 +0000 UTC m=+18.178989737"
	Nov 15 10:01:24 default-k8s-diff-port-679865 kubelet[1315]: I1115 10:01:24.502444    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhhs\" (UniqueName: \"kubernetes.io/projected/cac86649-71f6-4c8c-b775-c310a8db63bc-kube-api-access-kkhhs\") pod \"busybox\" (UID: \"cac86649-71f6-4c8c-b775-c310a8db63bc\") " pod="default/busybox"
	
	
	==> storage-provisioner [91ea7002c24de573200d1eab0787570c42b52fd9c7aeb7defc06a999a9cf4246] <==
	I1115 10:01:22.214125       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:01:22.222842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:01:22.222908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:01:22.225715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:22.230832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:01:22.231011       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:01:22.231112       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30a7c389-2335-4677-b5bc-b5dcc414ee67", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-679865_b89f89ea-99bd-42c7-bf74-8f1e4fbaf26a became leader
	I1115 10:01:22.231183       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-679865_b89f89ea-99bd-42c7-bf74-8f1e4fbaf26a!
	W1115 10:01:22.234111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:22.238297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:01:22.332161       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-679865_b89f89ea-99bd-42c7-bf74-8f1e4fbaf26a!
	W1115 10:01:24.245455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:24.255721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:26.259514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:26.265382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:28.268921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:28.273369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:30.277262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:30.282906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:32.286907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:32.291437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:34.294288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:01:34.301417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-679865 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.56s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-783113 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-783113 --alsologtostderr -v=1: exit status 80 (2.01274679s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-783113 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:01:49.758077  633444 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:01:49.758242  633444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:49.758254  633444 out.go:374] Setting ErrFile to fd 2...
	I1115 10:01:49.758259  633444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:49.758473  633444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:01:49.758754  633444 out.go:368] Setting JSON to false
	I1115 10:01:49.758828  633444 mustload.go:66] Loading cluster: newest-cni-783113
	I1115 10:01:49.759234  633444 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:49.759692  633444 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:49.780233  633444 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:49.780559  633444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:49.842059  633444 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:89 SystemTime:2025-11-15 10:01:49.830950464 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:49.842966  633444 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-783113 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:01:49.844829  633444 out.go:179] * Pausing node newest-cni-783113 ... 
	I1115 10:01:49.845989  633444 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:49.846240  633444 ssh_runner.go:195] Run: systemctl --version
	I1115 10:01:49.846277  633444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:49.864239  633444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:49.957842  633444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:49.971137  633444 pause.go:52] kubelet running: true
	I1115 10:01:49.971242  633444 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:01:50.128701  633444 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:01:50.128790  633444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:01:50.196933  633444 cri.go:89] found id: "3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966"
	I1115 10:01:50.196962  633444 cri.go:89] found id: "177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792"
	I1115 10:01:50.196968  633444 cri.go:89] found id: "b347dba9b065dbc9ab312f9e85bb5958e47274c599716dc75f0de2924b9e3277"
	I1115 10:01:50.196973  633444 cri.go:89] found id: "9409cc92c0e96c6895a87fb31f50ae5a740a26c9e4370bfc6e46f8f7dd07e7a7"
	I1115 10:01:50.196977  633444 cri.go:89] found id: "5f919a2e9786b1d58ad021f0e0907f1c99dc24c7a50298e330d71f4da52c9e03"
	I1115 10:01:50.196981  633444 cri.go:89] found id: "85cc4b53b288933ecd9863c2e7cd92befe5f1dffe99dfce282a0efb376cc5e26"
	I1115 10:01:50.196985  633444 cri.go:89] found id: ""
	I1115 10:01:50.197027  633444 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:01:50.210070  633444 retry.go:31] will retry after 180.366104ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:50Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:01:50.391610  633444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:50.405226  633444 pause.go:52] kubelet running: false
	I1115 10:01:50.405291  633444 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:01:50.532730  633444 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:01:50.532824  633444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:01:50.603995  633444 cri.go:89] found id: "3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966"
	I1115 10:01:50.604016  633444 cri.go:89] found id: "177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792"
	I1115 10:01:50.604020  633444 cri.go:89] found id: "b347dba9b065dbc9ab312f9e85bb5958e47274c599716dc75f0de2924b9e3277"
	I1115 10:01:50.604024  633444 cri.go:89] found id: "9409cc92c0e96c6895a87fb31f50ae5a740a26c9e4370bfc6e46f8f7dd07e7a7"
	I1115 10:01:50.604026  633444 cri.go:89] found id: "5f919a2e9786b1d58ad021f0e0907f1c99dc24c7a50298e330d71f4da52c9e03"
	I1115 10:01:50.604030  633444 cri.go:89] found id: "85cc4b53b288933ecd9863c2e7cd92befe5f1dffe99dfce282a0efb376cc5e26"
	I1115 10:01:50.604032  633444 cri.go:89] found id: ""
	I1115 10:01:50.604084  633444 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:01:50.616342  633444 retry.go:31] will retry after 233.10204ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:50Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:01:50.849774  633444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:50.867555  633444 pause.go:52] kubelet running: false
	I1115 10:01:50.867620  633444 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:01:50.981183  633444 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:01:50.981267  633444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:01:51.049557  633444 cri.go:89] found id: "3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966"
	I1115 10:01:51.049582  633444 cri.go:89] found id: "177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792"
	I1115 10:01:51.049587  633444 cri.go:89] found id: "b347dba9b065dbc9ab312f9e85bb5958e47274c599716dc75f0de2924b9e3277"
	I1115 10:01:51.049591  633444 cri.go:89] found id: "9409cc92c0e96c6895a87fb31f50ae5a740a26c9e4370bfc6e46f8f7dd07e7a7"
	I1115 10:01:51.049594  633444 cri.go:89] found id: "5f919a2e9786b1d58ad021f0e0907f1c99dc24c7a50298e330d71f4da52c9e03"
	I1115 10:01:51.049599  633444 cri.go:89] found id: "85cc4b53b288933ecd9863c2e7cd92befe5f1dffe99dfce282a0efb376cc5e26"
	I1115 10:01:51.049603  633444 cri.go:89] found id: ""
	I1115 10:01:51.049651  633444 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:01:51.061780  633444 retry.go:31] will retry after 426.071297ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:51Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:01:51.488109  633444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:51.501972  633444 pause.go:52] kubelet running: false
	I1115 10:01:51.502040  633444 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:01:51.618026  633444 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:01:51.618097  633444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:01:51.684958  633444 cri.go:89] found id: "3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966"
	I1115 10:01:51.684983  633444 cri.go:89] found id: "177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792"
	I1115 10:01:51.684989  633444 cri.go:89] found id: "b347dba9b065dbc9ab312f9e85bb5958e47274c599716dc75f0de2924b9e3277"
	I1115 10:01:51.684993  633444 cri.go:89] found id: "9409cc92c0e96c6895a87fb31f50ae5a740a26c9e4370bfc6e46f8f7dd07e7a7"
	I1115 10:01:51.684996  633444 cri.go:89] found id: "5f919a2e9786b1d58ad021f0e0907f1c99dc24c7a50298e330d71f4da52c9e03"
	I1115 10:01:51.685000  633444 cri.go:89] found id: "85cc4b53b288933ecd9863c2e7cd92befe5f1dffe99dfce282a0efb376cc5e26"
	I1115 10:01:51.685004  633444 cri.go:89] found id: ""
	I1115 10:01:51.685054  633444 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:01:51.699194  633444 out.go:203] 
	W1115 10:01:51.700451  633444 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:01:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:01:51.700472  633444 out.go:285] * 
	* 
	W1115 10:01:51.705301  633444 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:01:51.706439  633444 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-783113 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-783113
helpers_test.go:243: (dbg) docker inspect newest-cni-783113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940",
	        "Created": "2025-11-15T10:01:00.281154454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 630484,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:01:36.825179649Z",
	            "FinishedAt": "2025-11-15T10:01:35.884417594Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/hosts",
	        "LogPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940-json.log",
	        "Name": "/newest-cni-783113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-783113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-783113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940",
	                "LowerDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-783113",
	                "Source": "/var/lib/docker/volumes/newest-cni-783113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-783113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-783113",
	                "name.minikube.sigs.k8s.io": "newest-cni-783113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6c8b543d3a43190d8c7c440ebcebc1986eb3bc50ea35cd29673f75594c094431",
	            "SandboxKey": "/var/run/docker/netns/6c8b543d3a43",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-783113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5154d9a0ce32378165efc274699868177016a3c20c41bacb01c1c35fc0b5949c",
	                    "EndpointID": "905c514275da5be7629c1b09804a3be8b657da653b732fb1e95c62c3da0a95d1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "22:7f:29:19:4a:b2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-783113",
	                        "0ac6b2197ead"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113: exit status 2 (352.051034ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-783113 logs -n 25
I1115 10:01:52.216568  359063 config.go:182] Loaded profile config "auto-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p disable-driver-mounts-553319                                                                                                                                                                                                               │ disable-driver-mounts-553319 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ image   │ no-preload-559401 image list --format=json                                                                                                                                                                                                    │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p no-preload-559401 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p cert-expiration-341243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p embed-certs-430513 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ delete  │ -p cert-expiration-341243                                                                                                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p auto-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-430513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-783113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p newest-cni-783113 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-679865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-679865 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-783113 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ image   │ newest-cni-783113 image list --format=json                                                                                                                                                                                                    │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ pause   │ -p newest-cni-783113 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ ssh     │ -p auto-034018 pgrep -a kubelet                                                                                                                                                                                                               │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:01:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:01:36.569942  630269 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:01:36.570076  630269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:36.570087  630269 out.go:374] Setting ErrFile to fd 2...
	I1115 10:01:36.570091  630269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:36.570283  630269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:01:36.570795  630269 out.go:368] Setting JSON to false
	I1115 10:01:36.571916  630269 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6238,"bootTime":1763194659,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:01:36.572027  630269 start.go:143] virtualization: kvm guest
	I1115 10:01:36.573679  630269 out.go:179] * [newest-cni-783113] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:01:36.574734  630269 notify.go:221] Checking for updates...
	I1115 10:01:36.574784  630269 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:01:36.575817  630269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:01:36.577012  630269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:36.578405  630269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:01:36.579522  630269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:01:36.580675  630269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:01:36.582157  630269 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:36.582710  630269 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:01:36.607471  630269 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:01:36.607574  630269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:36.670675  630269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:01:36.658557671 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:36.670832  630269 docker.go:319] overlay module found
	I1115 10:01:36.672505  630269 out.go:179] * Using the docker driver based on existing profile
	I1115 10:01:32.665429  622837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:01:32.670534  622837 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:01:32.670559  622837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:01:32.685153  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:01:33.043700  622837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:01:33.043783  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:33.043877  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-034018 minikube.k8s.io/updated_at=2025_11_15T10_01_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=auto-034018 minikube.k8s.io/primary=true
	I1115 10:01:33.140260  622837 ops.go:34] apiserver oom_adj: -16
	I1115 10:01:33.140433  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:33.641139  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:34.141504  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:34.640510  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:35.141167  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:35.641547  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:36.141213  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:36.640578  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:36.673598  630269 start.go:309] selected driver: docker
	I1115 10:01:36.673617  630269 start.go:930] validating driver "docker" against &{Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:36.673747  630269 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:01:36.674601  630269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:36.747670  630269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:01:36.737376432 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:36.748046  630269 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:01:36.748079  630269 cni.go:84] Creating CNI manager for ""
	I1115 10:01:36.748145  630269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:36.748212  630269 start.go:353] cluster config:
	{Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:36.750705  630269 out.go:179] * Starting "newest-cni-783113" primary control-plane node in "newest-cni-783113" cluster
	I1115 10:01:36.751865  630269 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:01:36.753066  630269 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:01:36.754347  630269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:36.754405  630269 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:01:36.754431  630269 cache.go:65] Caching tarball of preloaded images
	I1115 10:01:36.754452  630269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:01:36.754572  630269 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:01:36.754589  630269 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:01:36.754709  630269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/config.json ...
	I1115 10:01:36.777265  630269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:01:36.777288  630269 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:01:36.777310  630269 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:01:36.777350  630269 start.go:360] acquireMachinesLock for newest-cni-783113: {Name:mkf30ab080def5f7c46d57225f0ee495d461161f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:01:36.777475  630269 start.go:364] duration metric: took 97.184µs to acquireMachinesLock for "newest-cni-783113"
	I1115 10:01:36.777511  630269 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:01:36.777517  630269 fix.go:54] fixHost starting: 
	I1115 10:01:36.777740  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:36.795826  630269 fix.go:112] recreateIfNeeded on newest-cni-783113: state=Stopped err=<nil>
	W1115 10:01:36.795905  630269 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:01:37.140547  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:37.641198  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:37.720238  622837 kubeadm.go:1114] duration metric: took 4.676515687s to wait for elevateKubeSystemPrivileges
	I1115 10:01:37.720278  622837 kubeadm.go:403] duration metric: took 17.004691232s to StartCluster
	I1115 10:01:37.720303  622837 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:37.720386  622837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:37.722727  622837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:37.723087  622837 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:01:37.723142  622837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:01:37.723207  622837 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:01:37.723359  622837 addons.go:70] Setting storage-provisioner=true in profile "auto-034018"
	I1115 10:01:37.723372  622837 config.go:182] Loaded profile config "auto-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:37.723374  622837 addons.go:70] Setting default-storageclass=true in profile "auto-034018"
	I1115 10:01:37.723385  622837 addons.go:239] Setting addon storage-provisioner=true in "auto-034018"
	I1115 10:01:37.723441  622837 host.go:66] Checking if "auto-034018" exists ...
	I1115 10:01:37.723428  622837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-034018"
	I1115 10:01:37.723844  622837 cli_runner.go:164] Run: docker container inspect auto-034018 --format={{.State.Status}}
	I1115 10:01:37.724007  622837 cli_runner.go:164] Run: docker container inspect auto-034018 --format={{.State.Status}}
	I1115 10:01:37.727654  622837 out.go:179] * Verifying Kubernetes components...
	I1115 10:01:37.728929  622837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:37.749210  622837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:01:37.750128  622837 addons.go:239] Setting addon default-storageclass=true in "auto-034018"
	I1115 10:01:37.750181  622837 host.go:66] Checking if "auto-034018" exists ...
	I1115 10:01:37.750694  622837 cli_runner.go:164] Run: docker container inspect auto-034018 --format={{.State.Status}}
	I1115 10:01:37.751748  622837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:37.751834  622837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:01:37.751919  622837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-034018
	I1115 10:01:37.787632  622837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:37.787726  622837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:01:37.787825  622837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-034018
	I1115 10:01:37.787899  622837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/auto-034018/id_rsa Username:docker}
	I1115 10:01:37.816385  622837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/auto-034018/id_rsa Username:docker}
	I1115 10:01:37.840217  622837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:01:37.888164  622837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:37.901792  622837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:37.934083  622837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:38.027888  622837 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
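	The replace command above injects a hosts block for host.minikube.internal (192.168.94.1) into the coredns ConfigMap. A minimal sketch for checking that the record landed, reusing the same kubectl path and kubeconfig as the log; the jsonpath/grep combination is only an illustration and is not something the test itself runs:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'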
	I1115 10:01:38.029330  622837 node_ready.go:35] waiting up to 15m0s for node "auto-034018" to be "Ready" ...
	I1115 10:01:38.216550  622837 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:01:35.312760  625726 addons.go:515] duration metric: took 2.588255477s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:01:35.793239  625726 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:01:35.799079  625726 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:01:35.799109  625726 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:01:36.292687  625726 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:01:36.297514  625726 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:01:36.298524  625726 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:36.298558  625726 api_server.go:131] duration metric: took 1.006274865s to wait for apiserver health ...
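	The 500 responses above are expected while the poststarthook/rbac/bootstrap-roles hook is still running; once it completes, the same /healthz endpoint returns 200 and the wait ends. A sketch for reproducing the verbose check by hand, assuming the kubeconfig and binary path used elsewhere in this log:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        get --raw '/healthz?verbose'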
	I1115 10:01:36.298569  625726 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:36.302680  625726 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:36.302726  625726 system_pods.go:61] "coredns-66bc5c9577-6gvgh" [605418c0-0b25-478e-bc97-875523469f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:36.302738  625726 system_pods.go:61] "etcd-embed-certs-430513" [c811a4dd-480d-4848-8c3b-15a0518be2d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:36.302747  625726 system_pods.go:61] "kindnet-h26k6" [01c61aeb-fa93-4a50-b032-f52dbb9215a4] Running
	I1115 10:01:36.302756  625726 system_pods.go:61] "kube-apiserver-embed-certs-430513" [8bdbd8f0-db7a-429c-8046-a248edbe5e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:36.302763  625726 system_pods.go:61] "kube-controller-manager-embed-certs-430513" [78c3f3b5-1c2a-4af4-9e25-95f4bf9fe86a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:36.302770  625726 system_pods.go:61] "kube-proxy-kd7wd" [27ddf833-a045-40a5-9220-9cbae8dd4875] Running
	I1115 10:01:36.302778  625726 system_pods.go:61] "kube-scheduler-embed-certs-430513" [eef0520d-ea72-42ca-b035-13ebbfa74df0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:36.302783  625726 system_pods.go:61] "storage-provisioner" [a1e774e7-a59e-4d09-abca-2a71de44c919] Running
	I1115 10:01:36.302801  625726 system_pods.go:74] duration metric: took 4.216678ms to wait for pod list to return data ...
	I1115 10:01:36.302815  625726 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:36.305715  625726 default_sa.go:45] found service account: "default"
	I1115 10:01:36.305735  625726 default_sa.go:55] duration metric: took 2.912812ms for default service account to be created ...
	I1115 10:01:36.305742  625726 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:01:36.308894  625726 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:36.308946  625726 system_pods.go:89] "coredns-66bc5c9577-6gvgh" [605418c0-0b25-478e-bc97-875523469f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:36.308958  625726 system_pods.go:89] "etcd-embed-certs-430513" [c811a4dd-480d-4848-8c3b-15a0518be2d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:36.308975  625726 system_pods.go:89] "kindnet-h26k6" [01c61aeb-fa93-4a50-b032-f52dbb9215a4] Running
	I1115 10:01:36.308986  625726 system_pods.go:89] "kube-apiserver-embed-certs-430513" [8bdbd8f0-db7a-429c-8046-a248edbe5e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:36.308999  625726 system_pods.go:89] "kube-controller-manager-embed-certs-430513" [78c3f3b5-1c2a-4af4-9e25-95f4bf9fe86a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:36.309009  625726 system_pods.go:89] "kube-proxy-kd7wd" [27ddf833-a045-40a5-9220-9cbae8dd4875] Running
	I1115 10:01:36.309022  625726 system_pods.go:89] "kube-scheduler-embed-certs-430513" [eef0520d-ea72-42ca-b035-13ebbfa74df0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:36.309030  625726 system_pods.go:89] "storage-provisioner" [a1e774e7-a59e-4d09-abca-2a71de44c919] Running
	I1115 10:01:36.309041  625726 system_pods.go:126] duration metric: took 3.292467ms to wait for k8s-apps to be running ...
	I1115 10:01:36.309052  625726 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:01:36.309101  625726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:36.325617  625726 system_svc.go:56] duration metric: took 16.554722ms WaitForService to wait for kubelet
	I1115 10:01:36.325648  625726 kubeadm.go:587] duration metric: took 3.6011784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:01:36.325670  625726 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:36.328907  625726 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:36.328940  625726 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:36.328957  625726 node_conditions.go:105] duration metric: took 3.28114ms to run NodePressure ...
	I1115 10:01:36.328975  625726 start.go:242] waiting for startup goroutines ...
	I1115 10:01:36.329002  625726 start.go:247] waiting for cluster config update ...
	I1115 10:01:36.329021  625726 start.go:256] writing updated cluster config ...
	I1115 10:01:36.329322  625726 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:36.333314  625726 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:01:36.336969  625726 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6gvgh" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:01:38.342670  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	I1115 10:01:36.798075  630269 out.go:252] * Restarting existing docker container for "newest-cni-783113" ...
	I1115 10:01:36.798171  630269 cli_runner.go:164] Run: docker start newest-cni-783113
	I1115 10:01:37.111970  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:37.131388  630269 kic.go:430] container "newest-cni-783113" state is running.
	I1115 10:01:37.131919  630269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:37.153793  630269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/config.json ...
	I1115 10:01:37.154102  630269 machine.go:94] provisionDockerMachine start ...
	I1115 10:01:37.154181  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:37.178043  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:37.178383  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:37.178408  630269 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:01:37.179167  630269 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38172->127.0.0.1:33474: read: connection reset by peer
	I1115 10:01:40.324578  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-783113
	
	I1115 10:01:40.324617  630269 ubuntu.go:182] provisioning hostname "newest-cni-783113"
	I1115 10:01:40.324689  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:40.350677  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:40.350959  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:40.350973  630269 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-783113 && echo "newest-cni-783113" | sudo tee /etc/hostname
	I1115 10:01:40.508764  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-783113
	
	I1115 10:01:40.508844  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:40.532796  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:40.533120  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:40.533149  630269 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-783113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-783113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-783113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:01:40.680041  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:01:40.680077  630269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:01:40.680102  630269 ubuntu.go:190] setting up certificates
	I1115 10:01:40.680116  630269 provision.go:84] configureAuth start
	I1115 10:01:40.680180  630269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:40.702046  630269 provision.go:143] copyHostCerts
	I1115 10:01:40.702129  630269 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:01:40.702156  630269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:01:40.702239  630269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:01:40.702431  630269 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:01:40.702447  630269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:01:40.702499  630269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:01:40.702621  630269 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:01:40.702633  630269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:01:40.702672  630269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:01:40.702753  630269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.newest-cni-783113 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-783113]
	I1115 10:01:41.272872  630269 provision.go:177] copyRemoteCerts
	I1115 10:01:41.272953  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:01:41.273006  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:41.295984  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:41.400460  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:01:41.423927  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:01:41.447638  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:01:41.470623  630269 provision.go:87] duration metric: took 790.487808ms to configureAuth
	I1115 10:01:41.470658  630269 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:01:41.470860  630269 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:41.470997  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:41.495469  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:41.495829  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:41.495852  630269 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:01:38.217739  622837 addons.go:515] duration metric: took 494.531476ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:01:38.532575  622837 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-034018" context rescaled to 1 replicas
	W1115 10:01:40.033169  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	I1115 10:01:41.823347  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:01:41.823381  630269 machine.go:97] duration metric: took 4.669257717s to provisionDockerMachine
	I1115 10:01:41.823411  630269 start.go:293] postStartSetup for "newest-cni-783113" (driver="docker")
	I1115 10:01:41.823425  630269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:01:41.823521  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:01:41.823589  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:41.848894  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:41.953441  630269 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:01:41.958096  630269 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:01:41.958148  630269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:01:41.958160  630269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:01:41.958210  630269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:01:41.958302  630269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:01:41.958425  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:01:41.969169  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:41.992980  630269 start.go:296] duration metric: took 169.549789ms for postStartSetup
	I1115 10:01:41.993090  630269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:01:41.993141  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:42.016721  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:42.120401  630269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:01:42.126863  630269 fix.go:56] duration metric: took 5.34933661s for fixHost
	I1115 10:01:42.126892  630269 start.go:83] releasing machines lock for "newest-cni-783113", held for 5.349393456s
	I1115 10:01:42.126965  630269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:42.150208  630269 ssh_runner.go:195] Run: cat /version.json
	I1115 10:01:42.150275  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:42.150298  630269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:01:42.150386  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:42.173111  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:42.173729  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:42.352356  630269 ssh_runner.go:195] Run: systemctl --version
	I1115 10:01:42.361968  630269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:01:42.407310  630269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:01:42.413943  630269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:01:42.414031  630269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:01:42.424746  630269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:01:42.424780  630269 start.go:496] detecting cgroup driver to use...
	I1115 10:01:42.424817  630269 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:01:42.424868  630269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:01:42.447299  630269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:01:42.464461  630269 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:01:42.464532  630269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:01:42.486230  630269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:01:42.499738  630269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:01:42.590820  630269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:01:42.681689  630269 docker.go:234] disabling docker service ...
	I1115 10:01:42.681760  630269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:01:42.699140  630269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:01:42.714382  630269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:01:42.814410  630269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:01:42.923145  630269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:01:42.943266  630269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:01:42.964849  630269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:01:42.964916  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:42.977953  630269 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:01:42.978026  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:42.990913  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.004573  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.018449  630269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:01:43.030252  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.044185  630269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.056944  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.070140  630269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:01:43.081364  630269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:01:43.092629  630269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:43.216556  630269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:01:44.266669  630269 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.050078549s)
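	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon_cgroup, and the net.ipv4.ip_unprivileged_port_start sysctl) before CRI-O is restarted. A minimal sketch for spot-checking the resulting drop-in on the node; the grep pattern is illustrative only:
	
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf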
	I1115 10:01:44.266695  630269 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:01:44.266738  630269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:01:44.271973  630269 start.go:564] Will wait 60s for crictl version
	I1115 10:01:44.272033  630269 ssh_runner.go:195] Run: which crictl
	I1115 10:01:44.275884  630269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:01:44.308189  630269 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:01:44.308263  630269 ssh_runner.go:195] Run: crio --version
	I1115 10:01:44.346982  630269 ssh_runner.go:195] Run: crio --version
	I1115 10:01:44.391434  630269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:01:44.392786  630269 cli_runner.go:164] Run: docker network inspect newest-cni-783113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:01:44.416032  630269 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:01:44.422091  630269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:01:44.441506  630269 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1115 10:01:40.343919  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:42.843258  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:44.843888  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	I1115 10:01:44.442993  630269 kubeadm.go:884] updating cluster {Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:01:44.443181  630269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:44.443265  630269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:44.487040  630269 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:44.487061  630269 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:01:44.487103  630269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:44.519622  630269 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:44.519646  630269 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:01:44.519658  630269 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:01:44.519802  630269 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-783113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
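	The [Unit]/[Service] fragment above is what minikube installs as a systemd drop-in for the kubelet (it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down). A sketch for viewing the merged unit on the node:
	
	    sudo systemctl cat kubelet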
	I1115 10:01:44.519909  630269 ssh_runner.go:195] Run: crio config
	I1115 10:01:44.581593  630269 cni.go:84] Creating CNI manager for ""
	I1115 10:01:44.581621  630269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:44.581646  630269 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:01:44.581679  630269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-783113 NodeName:newest-cni-783113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:01:44.581873  630269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-783113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:01:44.581962  630269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:01:44.592094  630269 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:01:44.592162  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:01:44.601958  630269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:01:44.616915  630269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:01:44.633820  630269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
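	At this point the kubeadm configuration printed above has been written to the node as /var/tmp/minikube/kubeadm.yaml.new. A sketch for sanity-checking such a file by hand; it assumes the kubeadm binary sits alongside the kubelet/kubectl binaries found above and that this kubeadm version ships the `config validate` subcommand:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new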
	I1115 10:01:44.649367  630269 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:01:44.654293  630269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
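
The bash one-liner above makes the /etc/hosts update idempotent: any existing control-plane.minikube.internal entry is dropped before the fresh mapping is appended. A rough equivalent in Go, assuming root privileges and with error handling kept minimal:

// Hedged sketch of the idempotent /etc/hosts rewrite shown above.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.103.2"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // same effect as grep -v $'\tcontrol-plane.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
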
	I1115 10:01:44.667589  630269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:44.778596  630269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:44.808420  630269 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113 for IP: 192.168.103.2
	I1115 10:01:44.808441  630269 certs.go:195] generating shared ca certs ...
	I1115 10:01:44.808458  630269 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:44.808625  630269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:01:44.808701  630269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:01:44.808721  630269 certs.go:257] generating profile certs ...
	I1115 10:01:44.808837  630269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/client.key
	I1115 10:01:44.808925  630269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/apiserver.key.93e7bed8
	I1115 10:01:44.808987  630269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/proxy-client.key
	I1115 10:01:44.809144  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 10:01:44.809191  630269 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 10:01:44.809207  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:01:44.809246  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:01:44.809281  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:01:44.809313  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 10:01:44.809370  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:44.810083  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:01:44.831783  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:01:44.852658  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:01:44.873910  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:01:44.899604  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:01:44.921251  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:01:44.940567  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:01:44.959620  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:01:44.979553  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:01:44.999671  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:01:45.019836  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:01:45.040042  630269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:01:45.054101  630269 ssh_runner.go:195] Run: openssl version
	I1115 10:01:45.060561  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:01:45.069731  630269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:45.073828  630269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:45.073897  630269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:45.114642  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:01:45.123783  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:01:45.133086  630269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:01:45.137207  630269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:01:45.137270  630269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:01:45.175795  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:01:45.185027  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:01:45.194349  630269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:01:45.198233  630269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:01:45.198292  630269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:01:45.233957  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
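
Each CA bundle installed under /usr/share/ca-certificates is also linked as /etc/ssl/certs/<subject-hash>.0 so OpenSSL-style trust lookups can find it; the hash comes from `openssl x509 -hash -noout`, as in the commands above. A small sketch of that step, shelling out to openssl for the hash (run as root; the path mirrors the log):

// Hedged sketch of the hash-symlink step: compute the subject hash with
// openssl, then install the /etc/ssl/certs/<hash>.0 symlink (like ln -fs).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}
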
	I1115 10:01:45.242873  630269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:01:45.247038  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:01:45.281334  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:01:45.325481  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:01:45.365221  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:01:45.401797  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:01:45.438910  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
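
The `-checkend 86400` invocations above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. The same check expressed in Go with crypto/x509, using one of the paths from the log:

// Minimal sketch of an `openssl x509 -checkend 86400` style check:
// parse the PEM certificate and confirm it is still valid 24h from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
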
	I1115 10:01:45.474313  630269 kubeadm.go:401] StartCluster: {Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1115 10:01:45.474443  630269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:01:45.474510  630269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:01:45.505308  630269 cri.go:89] found id: ""
	I1115 10:01:45.505380  630269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:01:45.516488  630269 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:01:45.516516  630269 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:01:45.516575  630269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:01:45.527652  630269 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:01:45.529219  630269 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-783113" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:45.530160  630269 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-355485/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-783113" cluster setting kubeconfig missing "newest-cni-783113" context setting]
	I1115 10:01:45.531415  630269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
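
kubeconfig.go detects that the newest-cni-783113 cluster and context entries are missing and repairs the kubeconfig before continuing. A minimal sketch of such a repair with client-go's clientcmd API; the server URL and CA path are assumptions based on the profile logged above, not values read from minikube:

// Hedged sketch (not minikube's exact code path): add missing cluster and
// context entries to a kubeconfig file and write it back.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21895-355485/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "newest-cni-783113"
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{
			Server:               "https://192.168.103.2:8443", // assumed endpoint
			CertificateAuthority: "/home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt",
		}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name, Namespace: "default"}
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
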
	I1115 10:01:45.533838  630269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:01:45.544464  630269 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:01:45.544505  630269 kubeadm.go:602] duration metric: took 27.980331ms to restartPrimaryControlPlane
	I1115 10:01:45.544530  630269 kubeadm.go:403] duration metric: took 70.23014ms to StartCluster
	I1115 10:01:45.544548  630269 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:45.544625  630269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:45.547151  630269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:45.547516  630269 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:01:45.547665  630269 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:01:45.547761  630269 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-783113"
	I1115 10:01:45.547790  630269 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-783113"
	W1115 10:01:45.547802  630269 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:01:45.547832  630269 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:45.547841  630269 addons.go:70] Setting dashboard=true in profile "newest-cni-783113"
	I1115 10:01:45.547869  630269 addons.go:239] Setting addon dashboard=true in "newest-cni-783113"
	I1115 10:01:45.547732  630269 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:45.547920  630269 addons.go:70] Setting default-storageclass=true in profile "newest-cni-783113"
	I1115 10:01:45.547942  630269 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-783113"
	W1115 10:01:45.547878  630269 addons.go:248] addon dashboard should already be in state true
	I1115 10:01:45.547983  630269 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:45.548268  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.548385  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.548487  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.552224  630269 out.go:179] * Verifying Kubernetes components...
	I1115 10:01:45.553726  630269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:45.574524  630269 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:01:45.574532  630269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:01:45.574958  630269 addons.go:239] Setting addon default-storageclass=true in "newest-cni-783113"
	W1115 10:01:45.574984  630269 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:01:45.575014  630269 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:45.575500  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.579622  630269 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:45.579647  630269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:01:45.579705  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:45.581119  630269 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:01:45.582370  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:01:45.582405  630269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:01:45.582481  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:45.607283  630269 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:45.607307  630269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:01:45.607379  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:45.609766  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:45.613177  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:45.630051  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:45.677742  630269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:45.694351  630269 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:01:45.694440  630269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:01:45.716509  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:45.720566  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:01:45.720592  630269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:01:45.734188  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:45.747440  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:01:45.747469  630269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:01:45.769226  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:01:45.769261  630269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:01:45.790228  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:01:45.790257  630269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1115 10:01:45.806813  630269 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1115 10:01:45.806868  630269 retry.go:31] will retry after 181.641891ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
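
The first `kubectl apply` of the storage-provisioner manifest fails because the apiserver is not yet serving on localhost:8443, so retry.go schedules another attempt after a short delay. A generic sketch of that retry-with-backoff pattern; applyAddon is a hypothetical stand-in for the logged command:

// Hedged sketch of the retry pattern visible above: apply a manifest, and on
// failure wait an increasing, jittered delay before trying again.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyAddon is a stand-in for the logged kubectl invocation.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "-f", manifest)
	return cmd.Run()
}

func main() {
	var err error
	for attempt, delay := 1, 150*time.Millisecond; attempt <= 5; attempt++ {
		if err = applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err == nil {
			fmt.Println("applied")
			return
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter) // e.g. "will retry after 181.641891ms" in the log
		delay *= 2
	}
	panic(err)
}
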
	I1115 10:01:45.807318  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:01:45.807335  630269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:01:45.822370  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:01:45.822441  630269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:01:45.836838  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:01:45.836872  630269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:01:45.853850  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:01:45.853877  630269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:01:45.866994  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:01:45.867019  630269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:01:45.879306  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:01:45.989739  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:46.194770  630269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 10:01:42.033294  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	W1115 10:01:44.533606  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	I1115 10:01:47.496760  630269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.762526029s)
	I1115 10:01:47.895897  630269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.01653847s)
	I1115 10:01:47.897073  630269 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-783113 addons enable metrics-server
	
	I1115 10:01:48.013438  630269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.023657304s)
	I1115 10:01:48.013568  630269 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.818758679s)
	I1115 10:01:48.013603  630269 api_server.go:72] duration metric: took 2.466050065s to wait for apiserver process to appear ...
	I1115 10:01:48.013613  630269 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:01:48.013636  630269 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:48.015318  630269 out.go:179] * Enabled addons: default-storageclass, dashboard, storage-provisioner
	I1115 10:01:48.016446  630269 addons.go:515] duration metric: took 2.468793567s for enable addons: enabled=[default-storageclass dashboard storage-provisioner]
	I1115 10:01:48.018479  630269 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:01:48.018511  630269 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:01:48.514114  630269 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:48.519405  630269 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:01:48.519433  630269 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:01:49.014040  630269 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:49.019477  630269 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:01:49.020562  630269 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:49.020590  630269 api_server.go:131] duration metric: took 1.006968992s to wait for apiserver health ...
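
The 500 responses above are expected right after a control-plane restart: /healthz reports failed post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) until they complete, after which the endpoint flips to 200. A minimal polling sketch; TLS verification is skipped here for brevity, whereas the real client would trust the cluster CA:

// Hedged sketch of the apiserver healthz wait: poll until /healthz returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.103.2:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return
			}
			// 500 while post-start hooks are still running, as in the log above.
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
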
	I1115 10:01:49.020602  630269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:49.023944  630269 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:49.023980  630269 system_pods.go:61] "coredns-66bc5c9577-87x7w" [3f2d84f5-7f97-4a19-b552-0575a9ceb536] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:49.023988  630269 system_pods.go:61] "etcd-newest-cni-783113" [2ea0aa42-7852-499c-8e8e-c5e1cfeb5707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:49.023994  630269 system_pods.go:61] "kindnet-zjdf2" [f7a3d406-4576-45ea-a09e-00df6579f9df] Running
	I1115 10:01:49.024000  630269 system_pods.go:61] "kube-apiserver-newest-cni-783113" [2313995d-c79b-4e18-8b97-3463f3d95a8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:49.024005  630269 system_pods.go:61] "kube-controller-manager-newest-cni-783113" [d3439ed1-3ef3-4865-9ff8-42c82ac3cfc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:49.024014  630269 system_pods.go:61] "kube-proxy-bqp7j" [19ca680a-9bd3-4943-842b-7ef042aa6e0e] Running
	I1115 10:01:49.024021  630269 system_pods.go:61] "kube-scheduler-newest-cni-783113" [8feea409-ed92-4a4d-8df7-39898903b818] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:49.024028  630269 system_pods.go:61] "storage-provisioner" [830eb5ed-8939-4ca1-a08d-440456d95a53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:49.024035  630269 system_pods.go:74] duration metric: took 3.425902ms to wait for pod list to return data ...
	I1115 10:01:49.024044  630269 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:49.026716  630269 default_sa.go:45] found service account: "default"
	I1115 10:01:49.026747  630269 default_sa.go:55] duration metric: took 2.68713ms for default service account to be created ...
	I1115 10:01:49.026763  630269 kubeadm.go:587] duration metric: took 3.479209382s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:01:49.026786  630269 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:49.029365  630269 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:49.029428  630269 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:49.029467  630269 node_conditions.go:105] duration metric: took 2.656322ms to run NodePressure ...
	I1115 10:01:49.029489  630269 start.go:242] waiting for startup goroutines ...
	I1115 10:01:49.029502  630269 start.go:247] waiting for cluster config update ...
	I1115 10:01:49.029517  630269 start.go:256] writing updated cluster config ...
	I1115 10:01:49.029853  630269 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:49.083359  630269 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:01:49.085448  630269 out.go:179] * Done! kubectl is now configured to use "newest-cni-783113" cluster and "default" namespace by default
	W1115 10:01:47.343136  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:49.843498  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:47.033082  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	I1115 10:01:49.032793  622837 node_ready.go:49] node "auto-034018" is "Ready"
	I1115 10:01:49.032825  622837 node_ready.go:38] duration metric: took 11.003439671s for node "auto-034018" to be "Ready" ...
	I1115 10:01:49.032844  622837 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:01:49.032907  622837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:01:49.045496  622837 api_server.go:72] duration metric: took 11.322361735s to wait for apiserver process to appear ...
	I1115 10:01:49.045528  622837 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:01:49.045553  622837 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:01:49.050595  622837 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:01:49.051757  622837 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:49.051786  622837 api_server.go:131] duration metric: took 6.250572ms to wait for apiserver health ...
	I1115 10:01:49.051798  622837 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:49.055597  622837 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:49.055642  622837 system_pods.go:61] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.055660  622837 system_pods.go:61] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.055668  622837 system_pods.go:61] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.055673  622837 system_pods.go:61] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.055682  622837 system_pods.go:61] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.055691  622837 system_pods.go:61] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.055696  622837 system_pods.go:61] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.055706  622837 system_pods.go:61] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.055717  622837 system_pods.go:74] duration metric: took 3.911203ms to wait for pod list to return data ...
	I1115 10:01:49.055731  622837 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:49.058224  622837 default_sa.go:45] found service account: "default"
	I1115 10:01:49.058245  622837 default_sa.go:55] duration metric: took 2.504349ms for default service account to be created ...
	I1115 10:01:49.058256  622837 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:01:49.061388  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:49.061441  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.061446  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.061456  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.061459  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.061470  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.061475  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.061479  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.061486  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.061528  622837 retry.go:31] will retry after 312.347036ms: missing components: kube-dns
	I1115 10:01:49.382127  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:49.382188  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.382197  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.382212  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.382226  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.382233  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.382242  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.382248  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.382259  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.382283  622837 retry.go:31] will retry after 383.502025ms: missing components: kube-dns
	I1115 10:01:49.771364  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:49.771442  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.771451  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.771460  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.771466  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.771475  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.771481  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.771486  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.771497  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.771530  622837 retry.go:31] will retry after 425.951583ms: missing components: kube-dns
	I1115 10:01:50.201588  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:50.201631  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Running
	I1115 10:01:50.201639  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:50.201645  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:50.201650  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:50.201655  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:50.201662  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:50.201669  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:50.201684  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Running
	I1115 10:01:50.201697  622837 system_pods.go:126] duration metric: took 1.143432108s to wait for k8s-apps to be running ...
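
The k8s-apps wait keeps listing kube-system pods and retrying while kube-dns is still pending, as the three retry cycles above show. A hedged client-go sketch of the same check; the kubeconfig path is taken from the log and the label selector targets the CoreDNS pods:

// Hedged sketch of the "missing components: kube-dns" retry loop using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21895-355485/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("kube-dns is running")
			return
		}
		fmt.Println("missing components: kube-dns; retrying")
		time.Sleep(400 * time.Millisecond)
	}
}
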
	I1115 10:01:50.201711  622837 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:01:50.201765  622837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:50.215182  622837 system_svc.go:56] duration metric: took 13.458446ms WaitForService to wait for kubelet
	I1115 10:01:50.215216  622837 kubeadm.go:587] duration metric: took 12.492092029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:01:50.215238  622837 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:50.218199  622837 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:50.218226  622837 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:50.218239  622837 node_conditions.go:105] duration metric: took 2.997277ms to run NodePressure ...
	I1115 10:01:50.218253  622837 start.go:242] waiting for startup goroutines ...
	I1115 10:01:50.218264  622837 start.go:247] waiting for cluster config update ...
	I1115 10:01:50.218277  622837 start.go:256] writing updated cluster config ...
	I1115 10:01:50.218619  622837 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:50.222554  622837 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:01:50.226291  622837 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gxsbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.230113  622837 pod_ready.go:94] pod "coredns-66bc5c9577-gxsbr" is "Ready"
	I1115 10:01:50.230132  622837 pod_ready.go:86] duration metric: took 3.812766ms for pod "coredns-66bc5c9577-gxsbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.231926  622837 pod_ready.go:83] waiting for pod "etcd-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.235432  622837 pod_ready.go:94] pod "etcd-auto-034018" is "Ready"
	I1115 10:01:50.235453  622837 pod_ready.go:86] duration metric: took 3.506292ms for pod "etcd-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.237129  622837 pod_ready.go:83] waiting for pod "kube-apiserver-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.240800  622837 pod_ready.go:94] pod "kube-apiserver-auto-034018" is "Ready"
	I1115 10:01:50.240817  622837 pod_ready.go:86] duration metric: took 3.670017ms for pod "kube-apiserver-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.242595  622837 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.627274  622837 pod_ready.go:94] pod "kube-controller-manager-auto-034018" is "Ready"
	I1115 10:01:50.627308  622837 pod_ready.go:86] duration metric: took 384.693592ms for pod "kube-controller-manager-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.827881  622837 pod_ready.go:83] waiting for pod "kube-proxy-9pmmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.226782  622837 pod_ready.go:94] pod "kube-proxy-9pmmv" is "Ready"
	I1115 10:01:51.226810  622837 pod_ready.go:86] duration metric: took 398.903606ms for pod "kube-proxy-9pmmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.427366  622837 pod_ready.go:83] waiting for pod "kube-scheduler-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.826728  622837 pod_ready.go:94] pod "kube-scheduler-auto-034018" is "Ready"
	I1115 10:01:51.826767  622837 pod_ready.go:86] duration metric: took 399.351016ms for pod "kube-scheduler-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.826796  622837 pod_ready.go:40] duration metric: took 1.604201385s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:01:51.878880  622837 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:01:51.880846  622837 out.go:179] * Done! kubectl is now configured to use "auto-034018" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.188741113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.192814405Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=90d9054b-1ee1-42df-8ee7-a29ce9c98d12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.193416308Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6fc7b2e6-1b4d-48cb-85c1-8deaf35da78e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.19447033Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.194913475Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.195198028Z" level=info msg="Ran pod sandbox 00e7796884f8e68d27c4039b92cbb9742d0b4bac7a43171fbe4ff26b58a8d621 with infra container: kube-system/kindnet-zjdf2/POD" id=90d9054b-1ee1-42df-8ee7-a29ce9c98d12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.195563471Z" level=info msg="Ran pod sandbox e2fcee80c030b50ce74aeaa547cda9536e7bab229d55de0fbe62e51639f20a5b with infra container: kube-system/kube-proxy-bqp7j/POD" id=6fc7b2e6-1b4d-48cb-85c1-8deaf35da78e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.19636922Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=db31bba0-1873-4135-a84d-4f1e0190840c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.196627374Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d704c136-cf5a-4563-bb0a-679d6a870486 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.197283729Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9d892a21-06ef-4383-9e70-6424064f18d5 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.197532302Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=17340a4a-d1b6-4121-8f22-10307a5cb7bd name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198431743Z" level=info msg="Creating container: kube-system/kindnet-zjdf2/kindnet-cni" id=1dbce062-e56d-40f9-99c0-9d568086cdc5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198520311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198431746Z" level=info msg="Creating container: kube-system/kube-proxy-bqp7j/kube-proxy" id=b2dc1601-ebea-4517-9366-8269f69c1fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198668827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.205534575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.206026841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.207979604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.208458747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.235407253Z" level=info msg="Created container 177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792: kube-system/kindnet-zjdf2/kindnet-cni" id=1dbce062-e56d-40f9-99c0-9d568086cdc5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.235992478Z" level=info msg="Starting container: 177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792" id=555e2bde-8b1b-4c61-bd3d-d188bcd0d95a name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.237842741Z" level=info msg="Started container" PID=1044 containerID=177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792 description=kube-system/kindnet-zjdf2/kindnet-cni id=555e2bde-8b1b-4c61-bd3d-d188bcd0d95a name=/runtime.v1.RuntimeService/StartContainer sandboxID=00e7796884f8e68d27c4039b92cbb9742d0b4bac7a43171fbe4ff26b58a8d621
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.23878445Z" level=info msg="Created container 3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966: kube-system/kube-proxy-bqp7j/kube-proxy" id=b2dc1601-ebea-4517-9366-8269f69c1fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.239576716Z" level=info msg="Starting container: 3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966" id=b162d96b-00f3-4523-89c9-866cbd77bb79 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.242667686Z" level=info msg="Started container" PID=1045 containerID=3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966 description=kube-system/kube-proxy-bqp7j/kube-proxy id=b162d96b-00f3-4523-89c9-866cbd77bb79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2fcee80c030b50ce74aeaa547cda9536e7bab229d55de0fbe62e51639f20a5b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3ad1b9ceb1dbf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   e2fcee80c030b       kube-proxy-bqp7j                            kube-system
	177965edad35a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   00e7796884f8e       kindnet-zjdf2                               kube-system
	b347dba9b065d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   4572ad29702e1       kube-apiserver-newest-cni-783113            kube-system
	9409cc92c0e96       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   0d9b202b03e1d       kube-scheduler-newest-cni-783113            kube-system
	5f919a2e9786b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   206c3b2d011a7       etcd-newest-cni-783113                      kube-system
	85cc4b53b2889       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   e3661a25eb63a       kube-controller-manager-newest-cni-783113   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-783113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-783113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=newest-cni-783113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_01_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:01:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-783113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-783113
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                d180c89e-341a-4dbc-bc47-54c5b0042756
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-783113                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-zjdf2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-783113             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-783113    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-bqp7j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-783113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           28s                node-controller  Node newest-cni-783113 event: Registered Node newest-cni-783113 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 8s)    kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 8s)    kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 8s)    kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-783113 event: Registered Node newest-cni-783113 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [5f919a2e9786b1d58ad021f0e0907f1c99dc24c7a50298e330d71f4da52c9e03] <==
	{"level":"warn","ts":"2025-11-15T10:01:46.759730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.765939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.774869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.781107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.787520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.793797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.805990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.813428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.820675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.828531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.834821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.841832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.849044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.856269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.862435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.868762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.876202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.882770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.890539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.900250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.906783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.927962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.935298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.943104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.983580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41884","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:01:52 up  1:44,  0 user,  load average: 5.28, 3.33, 2.09
	Linux newest-cni-783113 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792] <==
	I1115 10:01:48.434063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:01:48.434333       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:01:48.434483       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:01:48.434504       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:01:48.434528       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:01:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:01:48.725274       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:01:48.725308       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:01:48.725322       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:01:48.725481       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:01:49.125526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:01:49.125568       1 metrics.go:72] Registering metrics
	I1115 10:01:49.125682       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [b347dba9b065dbc9ab312f9e85bb5958e47274c599716dc75f0de2924b9e3277] <==
	I1115 10:01:47.471401       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:01:47.471912       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:01:47.471977       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:01:47.472079       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:01:47.472115       1 policy_source.go:240] refreshing policies
	I1115 10:01:47.472210       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:01:47.472289       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:01:47.472942       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:01:47.472957       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:01:47.478984       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:01:47.493863       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:01:47.511156       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:47.550249       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:01:47.785628       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:01:47.814511       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:01:47.833154       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:01:47.841649       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:01:47.848533       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:01:47.880991       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.242.104"}
	I1115 10:01:47.890959       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.111.99"}
	I1115 10:01:48.374532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:01:50.844958       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:01:51.193900       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:01:51.443892       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [85cc4b53b288933ecd9863c2e7cd92befe5f1dffe99dfce282a0efb376cc5e26] <==
	I1115 10:01:50.800378       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:01:50.804637       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:01:50.804705       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:01:50.807235       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:01:50.814548       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-783113"
	I1115 10:01:50.814692       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:01:50.840387       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:01:50.840480       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:01:50.840480       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:01:50.840654       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:01:50.840702       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:01:50.840927       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:01:50.841044       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:01:50.841058       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:01:50.841146       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:01:50.841254       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:01:50.842538       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:01:50.846213       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:50.846258       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:01:50.846377       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:01:50.849634       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:01:50.849642       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:50.854739       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:01:50.856998       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:01:50.857969       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966] <==
	I1115 10:01:48.276180       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:01:48.346266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:01:48.446656       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:01:48.446716       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:01:48.446812       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:01:48.469772       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:01:48.469839       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:01:48.475797       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:01:48.476238       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:01:48.476268       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:48.479862       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:01:48.479882       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:01:48.479897       1 config.go:309] "Starting node config controller"
	I1115 10:01:48.479905       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:01:48.479910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:01:48.479912       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:01:48.479917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:01:48.479900       1 config.go:200] "Starting service config controller"
	I1115 10:01:48.479926       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:01:48.581038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:01:48.581838       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:01:48.581850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9409cc92c0e96c6895a87fb31f50ae5a740a26c9e4370bfc6e46f8f7dd07e7a7] <==
	I1115 10:01:46.788741       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:01:48.018903       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:01:48.018930       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:48.023202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:48.023202       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:01:48.023237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:48.023250       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:48.023263       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:48.023252       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:01:48.024079       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:01:48.024246       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:01:48.124411       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:01:48.124460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:48.124427       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:01:46 newest-cni-783113 kubelet[663]: E1115 10:01:46.924189     663 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-783113\" not found" node="newest-cni-783113"
	Nov 15 10:01:46 newest-cni-783113 kubelet[663]: E1115 10:01:46.924308     663 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-783113\" not found" node="newest-cni-783113"
	Nov 15 10:01:46 newest-cni-783113 kubelet[663]: E1115 10:01:46.924512     663 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-783113\" not found" node="newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.485784     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.498008     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-783113\" already exists" pod="kube-system/kube-controller-manager-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.498177     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.505825     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-783113\" already exists" pod="kube-system/kube-scheduler-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.505866     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.511599     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-783113\" already exists" pod="kube-system/etcd-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.511631     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.519266     663 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.519472     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-783113\" already exists" pod="kube-system/kube-apiserver-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.519516     663 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.519559     663 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.520497     663 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.880384     663 apiserver.go:52] "Watching apiserver"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.885587     663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929644     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-xtables-lock\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929689     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-lib-modules\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929795     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19ca680a-9bd3-4943-842b-7ef042aa6e0e-xtables-lock\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929864     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ca680a-9bd3-4943-842b-7ef042aa6e0e-lib-modules\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929918     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-cni-cfg\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:50 newest-cni-783113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:01:50 newest-cni-783113 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:01:50 newest-cni-783113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
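The describe-nodes output above reports the node NotReady because of "no CNI configuration file in /etc/cni/net.d/", and the kubelet journal ends with kubelet.service being stopped. A minimal manual follow-up sketch, using the profile name newest-cni-783113 taken from the log above (these commands are not part of the automated test run):

	# list CNI configs on the node; an empty directory matches the NotReady reason above
	out/minikube-linux-amd64 -p newest-cni-783113 ssh -- ls -la /etc/cni/net.d/
	# confirm whether kubelet is still stopped on the node
	out/minikube-linux-amd64 -p newest-cni-783113 ssh -- sudo systemctl status kubelet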
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-783113 -n newest-cni-783113
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-783113 -n newest-cni-783113: exit status 2 (407.451354ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-783113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l: exit status 1 (94.856165ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-87x7w" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-9dhx4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-l6h4l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l: exit status 1
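Since the four pods named by the field-selector query are no longer found, re-listing across all namespaces would show the current pod set for the profile; a manual sketch, not part of the automated test:

	kubectl --context newest-cni-783113 get pods -A -o wide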
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-783113
helpers_test.go:243: (dbg) docker inspect newest-cni-783113:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940",
	        "Created": "2025-11-15T10:01:00.281154454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 630484,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:01:36.825179649Z",
	            "FinishedAt": "2025-11-15T10:01:35.884417594Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/hosts",
	        "LogPath": "/var/lib/docker/containers/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940/0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940-json.log",
	        "Name": "/newest-cni-783113",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-783113:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-783113",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ac6b2197ead325140230d93cdaa6a0d542c53782b5b23e5fe47564596b9b940",
	                "LowerDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adf1e197b96e4bdc3adefbdfad4bf35a60d874784fe2ff099ee9fda65e08bccc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-783113",
	                "Source": "/var/lib/docker/volumes/newest-cni-783113/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-783113",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-783113",
	                "name.minikube.sigs.k8s.io": "newest-cni-783113",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6c8b543d3a43190d8c7c440ebcebc1986eb3bc50ea35cd29673f75594c094431",
	            "SandboxKey": "/var/run/docker/netns/6c8b543d3a43",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-783113": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5154d9a0ce32378165efc274699868177016a3c20c41bacb01c1c35fc0b5949c",
	                    "EndpointID": "905c514275da5be7629c1b09804a3be8b657da653b732fb1e95c62c3da0a95d1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "22:7f:29:19:4a:b2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-783113",
	                        "0ac6b2197ead"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
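The inspect output above shows the node container itself still running and not paused ("Paused": false). A quick way to re-check just those fields is a Go-template query against the same container; a sketch, assuming the container name newest-cni-783113 from the inspect above:

	# print only the runtime state fields relevant to the Pause test
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-783113
	# or summarize via docker ps
	docker ps --filter name=newest-cni-783113 --format '{{.Names}}: {{.Status}}'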
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113: exit status 2 (380.826444ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-783113 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-783113 logs -n 25: (1.303012336s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p old-k8s-version-335655                                                                                                                                                                                                                     │ old-k8s-version-335655       │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p disable-driver-mounts-553319                                                                                                                                                                                                               │ disable-driver-mounts-553319 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ image   │ no-preload-559401 image list --format=json                                                                                                                                                                                                    │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ pause   │ -p no-preload-559401 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │                     │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ delete  │ -p no-preload-559401                                                                                                                                                                                                                          │ no-preload-559401            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:00 UTC │
	│ start   │ -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:00 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p cert-expiration-341243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-430513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p embed-certs-430513 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ delete  │ -p cert-expiration-341243                                                                                                                                                                                                                     │ cert-expiration-341243       │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p auto-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-430513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-783113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p newest-cni-783113 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-679865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-679865 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-783113 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ start   │ -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ image   │ newest-cni-783113 image list --format=json                                                                                                                                                                                                    │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	│ pause   │ -p newest-cni-783113 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-783113            │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │                     │
	│ ssh     │ -p auto-034018 pgrep -a kubelet                                                                                                                                                                                                               │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:01 UTC │ 15 Nov 25 10:01 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:01:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:01:36.569942  630269 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:01:36.570076  630269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:36.570087  630269 out.go:374] Setting ErrFile to fd 2...
	I1115 10:01:36.570091  630269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:01:36.570283  630269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:01:36.570795  630269 out.go:368] Setting JSON to false
	I1115 10:01:36.571916  630269 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6238,"bootTime":1763194659,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:01:36.572027  630269 start.go:143] virtualization: kvm guest
	I1115 10:01:36.573679  630269 out.go:179] * [newest-cni-783113] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:01:36.574734  630269 notify.go:221] Checking for updates...
	I1115 10:01:36.574784  630269 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:01:36.575817  630269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:01:36.577012  630269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:36.578405  630269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:01:36.579522  630269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:01:36.580675  630269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:01:36.582157  630269 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:36.582710  630269 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:01:36.607471  630269 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:01:36.607574  630269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:36.670675  630269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:01:36.658557671 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:36.670832  630269 docker.go:319] overlay module found
	I1115 10:01:36.672505  630269 out.go:179] * Using the docker driver based on existing profile
	I1115 10:01:32.665429  622837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:01:32.670534  622837 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:01:32.670559  622837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:01:32.685153  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:01:33.043700  622837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:01:33.043783  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:33.043877  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-034018 minikube.k8s.io/updated_at=2025_11_15T10_01_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=auto-034018 minikube.k8s.io/primary=true
	I1115 10:01:33.140260  622837 ops.go:34] apiserver oom_adj: -16
	I1115 10:01:33.140433  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:33.641139  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:34.141504  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:34.640510  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:35.141167  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:35.641547  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:36.141213  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:36.640578  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:36.673598  630269 start.go:309] selected driver: docker
	I1115 10:01:36.673617  630269 start.go:930] validating driver "docker" against &{Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:36.673747  630269 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:01:36.674601  630269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:01:36.747670  630269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:01:36.737376432 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:01:36.748046  630269 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:01:36.748079  630269 cni.go:84] Creating CNI manager for ""
	I1115 10:01:36.748145  630269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:36.748212  630269 start.go:353] cluster config:
	{Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:36.750705  630269 out.go:179] * Starting "newest-cni-783113" primary control-plane node in "newest-cni-783113" cluster
	I1115 10:01:36.751865  630269 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:01:36.753066  630269 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:01:36.754347  630269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:36.754405  630269 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:01:36.754431  630269 cache.go:65] Caching tarball of preloaded images
	I1115 10:01:36.754452  630269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:01:36.754572  630269 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:01:36.754589  630269 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:01:36.754709  630269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/config.json ...
	I1115 10:01:36.777265  630269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:01:36.777288  630269 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:01:36.777310  630269 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:01:36.777350  630269 start.go:360] acquireMachinesLock for newest-cni-783113: {Name:mkf30ab080def5f7c46d57225f0ee495d461161f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:01:36.777475  630269 start.go:364] duration metric: took 97.184µs to acquireMachinesLock for "newest-cni-783113"
	I1115 10:01:36.777511  630269 start.go:96] Skipping create...Using existing machine configuration
	I1115 10:01:36.777517  630269 fix.go:54] fixHost starting: 
	I1115 10:01:36.777740  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:36.795826  630269 fix.go:112] recreateIfNeeded on newest-cni-783113: state=Stopped err=<nil>
	W1115 10:01:36.795905  630269 fix.go:138] unexpected machine state, will restart: <nil>
	I1115 10:01:37.140547  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:37.641198  622837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:01:37.720238  622837 kubeadm.go:1114] duration metric: took 4.676515687s to wait for elevateKubeSystemPrivileges
	I1115 10:01:37.720278  622837 kubeadm.go:403] duration metric: took 17.004691232s to StartCluster
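	The burst of "kubectl get sa default" calls above (10:01:33 through 10:01:37) is minikube waiting for the default ServiceAccount in the new cluster to exist; the minikube-rbac cluster-admin binding at 10:01:33.043 is issued alongside that wait. A minimal shell sketch of the same poll, assuming the roughly half-second retry interval the timestamps suggest:
	
	  # Illustrative only; minikube drives this loop from Go rather than a shell script.
	  KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
	  KCFG=/var/lib/minikube/kubeconfig
	  until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
	    sleep 0.5
	  done
	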
	I1115 10:01:37.720303  622837 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:37.720386  622837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:37.722727  622837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:37.723087  622837 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:01:37.723142  622837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:01:37.723207  622837 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:01:37.723359  622837 addons.go:70] Setting storage-provisioner=true in profile "auto-034018"
	I1115 10:01:37.723372  622837 config.go:182] Loaded profile config "auto-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:37.723374  622837 addons.go:70] Setting default-storageclass=true in profile "auto-034018"
	I1115 10:01:37.723385  622837 addons.go:239] Setting addon storage-provisioner=true in "auto-034018"
	I1115 10:01:37.723441  622837 host.go:66] Checking if "auto-034018" exists ...
	I1115 10:01:37.723428  622837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-034018"
	I1115 10:01:37.723844  622837 cli_runner.go:164] Run: docker container inspect auto-034018 --format={{.State.Status}}
	I1115 10:01:37.724007  622837 cli_runner.go:164] Run: docker container inspect auto-034018 --format={{.State.Status}}
	I1115 10:01:37.727654  622837 out.go:179] * Verifying Kubernetes components...
	I1115 10:01:37.728929  622837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:37.749210  622837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:01:37.750128  622837 addons.go:239] Setting addon default-storageclass=true in "auto-034018"
	I1115 10:01:37.750181  622837 host.go:66] Checking if "auto-034018" exists ...
	I1115 10:01:37.750694  622837 cli_runner.go:164] Run: docker container inspect auto-034018 --format={{.State.Status}}
	I1115 10:01:37.751748  622837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:37.751834  622837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:01:37.751919  622837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-034018
	I1115 10:01:37.787632  622837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:37.787726  622837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:01:37.787825  622837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-034018
	I1115 10:01:37.787899  622837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/auto-034018/id_rsa Username:docker}
	I1115 10:01:37.816385  622837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/auto-034018/id_rsa Username:docker}
	I1115 10:01:37.840217  622837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:01:37.888164  622837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:37.901792  622837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:37.934083  622837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:38.027888  622837 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
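	The sed pipeline at 10:01:37.840 rewrites the CoreDNS ConfigMap so pods can resolve host.minikube.internal to the gateway address, which the "host record injected" line above confirms. A sketch of checking that edit by hand, with the expected hosts block reconstructed from the sed expression (the rest of the Corefile is assumed to be the stock layout):
	
	  # Read the live Corefile back from the cluster.
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # Expected to contain, just ahead of the "forward . /etc/resolv.conf" directive:
	  #        hosts {
	  #           192.168.94.1 host.minikube.internal
	  #           fallthrough
	  #        }
	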
	I1115 10:01:38.029330  622837 node_ready.go:35] waiting up to 15m0s for node "auto-034018" to be "Ready" ...
	I1115 10:01:38.216550  622837 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:01:35.312760  625726 addons.go:515] duration metric: took 2.588255477s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1115 10:01:35.793239  625726 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:01:35.799079  625726 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:01:35.799109  625726 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:01:36.292687  625726 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1115 10:01:36.297514  625726 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1115 10:01:36.298524  625726 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:36.298558  625726 api_server.go:131] duration metric: took 1.006274865s to wait for apiserver health ...
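	The 500 responses above are the kube-apiserver's per-check health breakdown: every post-start hook reports ok except rbac/bootstrap-roles, and the probe flips to 200 once that hook finishes. A sketch of running the same probe by hand against the address from the log (the verbose healthz endpoint is standard apiserver behaviour; if anonymous access to /healthz is disabled, the request additionally needs client credentials):
	
	  # -k because the probe hits the node address directly, without the cluster CA.
	  curl -sk "https://192.168.76.2:8443/healthz?verbose"
	  # Returns HTTP 500 with "[-]poststarthook/rbac/bootstrap-roles failed" while that
	  # hook is settling, then HTTP 200 with every check reporting ok.
	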
	I1115 10:01:36.298569  625726 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:36.302680  625726 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:36.302726  625726 system_pods.go:61] "coredns-66bc5c9577-6gvgh" [605418c0-0b25-478e-bc97-875523469f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:36.302738  625726 system_pods.go:61] "etcd-embed-certs-430513" [c811a4dd-480d-4848-8c3b-15a0518be2d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:36.302747  625726 system_pods.go:61] "kindnet-h26k6" [01c61aeb-fa93-4a50-b032-f52dbb9215a4] Running
	I1115 10:01:36.302756  625726 system_pods.go:61] "kube-apiserver-embed-certs-430513" [8bdbd8f0-db7a-429c-8046-a248edbe5e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:36.302763  625726 system_pods.go:61] "kube-controller-manager-embed-certs-430513" [78c3f3b5-1c2a-4af4-9e25-95f4bf9fe86a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:36.302770  625726 system_pods.go:61] "kube-proxy-kd7wd" [27ddf833-a045-40a5-9220-9cbae8dd4875] Running
	I1115 10:01:36.302778  625726 system_pods.go:61] "kube-scheduler-embed-certs-430513" [eef0520d-ea72-42ca-b035-13ebbfa74df0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:36.302783  625726 system_pods.go:61] "storage-provisioner" [a1e774e7-a59e-4d09-abca-2a71de44c919] Running
	I1115 10:01:36.302801  625726 system_pods.go:74] duration metric: took 4.216678ms to wait for pod list to return data ...
	I1115 10:01:36.302815  625726 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:36.305715  625726 default_sa.go:45] found service account: "default"
	I1115 10:01:36.305735  625726 default_sa.go:55] duration metric: took 2.912812ms for default service account to be created ...
	I1115 10:01:36.305742  625726 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:01:36.308894  625726 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:36.308946  625726 system_pods.go:89] "coredns-66bc5c9577-6gvgh" [605418c0-0b25-478e-bc97-875523469f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:36.308958  625726 system_pods.go:89] "etcd-embed-certs-430513" [c811a4dd-480d-4848-8c3b-15a0518be2d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:36.308975  625726 system_pods.go:89] "kindnet-h26k6" [01c61aeb-fa93-4a50-b032-f52dbb9215a4] Running
	I1115 10:01:36.308986  625726 system_pods.go:89] "kube-apiserver-embed-certs-430513" [8bdbd8f0-db7a-429c-8046-a248edbe5e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:36.308999  625726 system_pods.go:89] "kube-controller-manager-embed-certs-430513" [78c3f3b5-1c2a-4af4-9e25-95f4bf9fe86a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:36.309009  625726 system_pods.go:89] "kube-proxy-kd7wd" [27ddf833-a045-40a5-9220-9cbae8dd4875] Running
	I1115 10:01:36.309022  625726 system_pods.go:89] "kube-scheduler-embed-certs-430513" [eef0520d-ea72-42ca-b035-13ebbfa74df0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:36.309030  625726 system_pods.go:89] "storage-provisioner" [a1e774e7-a59e-4d09-abca-2a71de44c919] Running
	I1115 10:01:36.309041  625726 system_pods.go:126] duration metric: took 3.292467ms to wait for k8s-apps to be running ...
	I1115 10:01:36.309052  625726 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:01:36.309101  625726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:36.325617  625726 system_svc.go:56] duration metric: took 16.554722ms WaitForService to wait for kubelet
	I1115 10:01:36.325648  625726 kubeadm.go:587] duration metric: took 3.6011784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:01:36.325670  625726 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:36.328907  625726 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:36.328940  625726 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:36.328957  625726 node_conditions.go:105] duration metric: took 3.28114ms to run NodePressure ...
	I1115 10:01:36.328975  625726 start.go:242] waiting for startup goroutines ...
	I1115 10:01:36.329002  625726 start.go:247] waiting for cluster config update ...
	I1115 10:01:36.329021  625726 start.go:256] writing updated cluster config ...
	I1115 10:01:36.329322  625726 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:36.333314  625726 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:01:36.336969  625726 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6gvgh" in "kube-system" namespace to be "Ready" or be gone ...
	W1115 10:01:38.342670  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	I1115 10:01:36.798075  630269 out.go:252] * Restarting existing docker container for "newest-cni-783113" ...
	I1115 10:01:36.798171  630269 cli_runner.go:164] Run: docker start newest-cni-783113
	I1115 10:01:37.111970  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:37.131388  630269 kic.go:430] container "newest-cni-783113" state is running.
	I1115 10:01:37.131919  630269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:37.153793  630269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/config.json ...
	I1115 10:01:37.154102  630269 machine.go:94] provisionDockerMachine start ...
	I1115 10:01:37.154181  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:37.178043  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:37.178383  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:37.178408  630269 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:01:37.179167  630269 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38172->127.0.0.1:33474: read: connection reset by peer
	I1115 10:01:40.324578  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-783113
	
	I1115 10:01:40.324617  630269 ubuntu.go:182] provisioning hostname "newest-cni-783113"
	I1115 10:01:40.324689  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:40.350677  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:40.350959  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:40.350973  630269 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-783113 && echo "newest-cni-783113" | sudo tee /etc/hostname
	I1115 10:01:40.508764  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-783113
	
	I1115 10:01:40.508844  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:40.532796  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:40.533120  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:40.533149  630269 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-783113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-783113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-783113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:01:40.680041  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:01:40.680077  630269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:01:40.680102  630269 ubuntu.go:190] setting up certificates
	I1115 10:01:40.680116  630269 provision.go:84] configureAuth start
	I1115 10:01:40.680180  630269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:40.702046  630269 provision.go:143] copyHostCerts
	I1115 10:01:40.702129  630269 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:01:40.702156  630269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:01:40.702239  630269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:01:40.702431  630269 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:01:40.702447  630269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:01:40.702499  630269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:01:40.702621  630269 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:01:40.702633  630269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:01:40.702672  630269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:01:40.702753  630269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.newest-cni-783113 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-783113]
	I1115 10:01:41.272872  630269 provision.go:177] copyRemoteCerts
	I1115 10:01:41.272953  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:01:41.273006  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:41.295984  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:41.400460  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:01:41.423927  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:01:41.447638  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1115 10:01:41.470623  630269 provision.go:87] duration metric: took 790.487808ms to configureAuth
	I1115 10:01:41.470658  630269 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:01:41.470860  630269 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:41.470997  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:41.495469  630269 main.go:143] libmachine: Using SSH client type: native
	I1115 10:01:41.495829  630269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I1115 10:01:41.495852  630269 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:01:38.217739  622837 addons.go:515] duration metric: took 494.531476ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:01:38.532575  622837 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-034018" context rescaled to 1 replicas
	W1115 10:01:40.033169  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	I1115 10:01:41.823347  630269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:01:41.823381  630269 machine.go:97] duration metric: took 4.669257717s to provisionDockerMachine
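	Provisioning above drops an environment file so the CRI-O service treats the cluster service CIDR as an insecure registry range, and the echoed CRIO_MINIKUBE_OPTIONS line confirms the write. A minimal manual check, run inside the node (for example via "minikube ssh -p newest-cni-783113"):
	
	  cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	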
	I1115 10:01:41.823411  630269 start.go:293] postStartSetup for "newest-cni-783113" (driver="docker")
	I1115 10:01:41.823425  630269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:01:41.823521  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:01:41.823589  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:41.848894  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:41.953441  630269 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:01:41.958096  630269 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:01:41.958148  630269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:01:41.958160  630269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:01:41.958210  630269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:01:41.958302  630269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:01:41.958425  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:01:41.969169  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:41.992980  630269 start.go:296] duration metric: took 169.549789ms for postStartSetup
	I1115 10:01:41.993090  630269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:01:41.993141  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:42.016721  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:42.120401  630269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:01:42.126863  630269 fix.go:56] duration metric: took 5.34933661s for fixHost
	I1115 10:01:42.126892  630269 start.go:83] releasing machines lock for "newest-cni-783113", held for 5.349393456s
	I1115 10:01:42.126965  630269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-783113
	I1115 10:01:42.150208  630269 ssh_runner.go:195] Run: cat /version.json
	I1115 10:01:42.150275  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:42.150298  630269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:01:42.150386  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:42.173111  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:42.173729  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:42.352356  630269 ssh_runner.go:195] Run: systemctl --version
	I1115 10:01:42.361968  630269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:01:42.407310  630269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:01:42.413943  630269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:01:42.414031  630269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:01:42.424746  630269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1115 10:01:42.424780  630269 start.go:496] detecting cgroup driver to use...
	I1115 10:01:42.424817  630269 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:01:42.424868  630269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:01:42.447299  630269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:01:42.464461  630269 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:01:42.464532  630269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:01:42.486230  630269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:01:42.499738  630269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:01:42.590820  630269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:01:42.681689  630269 docker.go:234] disabling docker service ...
	I1115 10:01:42.681760  630269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:01:42.699140  630269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:01:42.714382  630269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:01:42.814410  630269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:01:42.923145  630269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:01:42.943266  630269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:01:42.964849  630269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:01:42.964916  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:42.977953  630269 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:01:42.978026  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:42.990913  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.004573  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.018449  630269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:01:43.030252  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.044185  630269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:01:43.056944  630269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
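	Taken together, the sed commands above (10:01:42.964 through 10:01:43.056) point CRI-O at the 3.10.1 pause image, switch it to the systemd cgroup manager with conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. A sketch of the resulting fragment of /etc/crio/crio.conf.d/02-crio.conf, reconstructed from those commands rather than copied from the node (other keys in the file are left untouched):
	
	  sudo cat /etc/crio/crio.conf.d/02-crio.conf
	  # ...
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  # default_sysctls = [
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	  # ]
	  # ...
	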
	I1115 10:01:43.070140  630269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:01:43.081364  630269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:01:43.092629  630269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:43.216556  630269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:01:44.266669  630269 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.050078549s)
	I1115 10:01:44.266695  630269 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:01:44.266738  630269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:01:44.271973  630269 start.go:564] Will wait 60s for crictl version
	I1115 10:01:44.272033  630269 ssh_runner.go:195] Run: which crictl
	I1115 10:01:44.275884  630269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:01:44.308189  630269 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:01:44.308263  630269 ssh_runner.go:195] Run: crio --version
	I1115 10:01:44.346982  630269 ssh_runner.go:195] Run: crio --version
	I1115 10:01:44.391434  630269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:01:44.392786  630269 cli_runner.go:164] Run: docker network inspect newest-cni-783113 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:01:44.416032  630269 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1115 10:01:44.422091  630269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:01:44.441506  630269 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1115 10:01:40.343919  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:42.843258  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:44.843888  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	I1115 10:01:44.442993  630269 kubeadm.go:884] updating cluster {Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:01:44.443181  630269 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:01:44.443265  630269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:44.487040  630269 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:44.487061  630269 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:01:44.487103  630269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:01:44.519622  630269 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:01:44.519646  630269 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:01:44.519658  630269 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1115 10:01:44.519802  630269 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-783113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
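	The drop-in above clears ExecStart and re-points it at the pinned /var/lib/minikube/binaries/v1.34.1/kubelet with the node IP and hostname override; a few lines further down it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A short sketch, assuming that path, for inspecting the effective unit on the node:

	  # Show the merged kubelet unit (base unit plus the minikube drop-in)
	  sudo systemctl cat kubelet
	  # Or read the drop-in directly
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf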
	I1115 10:01:44.519909  630269 ssh_runner.go:195] Run: crio config
	I1115 10:01:44.581593  630269 cni.go:84] Creating CNI manager for ""
	I1115 10:01:44.581621  630269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 10:01:44.581646  630269 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1115 10:01:44.581679  630269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-783113 NodeName:newest-cni-783113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:01:44.581873  630269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-783113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
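	The generated kubeadm.yaml above stitches together four documents: InitConfiguration (advertise address, CRI socket, node registration), ClusterConfiguration (cert SANs, admission plugins, control-plane endpoint, pod/service CIDRs), KubeletConfiguration (systemd cgroup driver, CRI-O endpoint, relaxed eviction thresholds) and KubeProxyConfiguration (cluster CIDR, conntrack overrides). A hedged sketch for looking at the file minikube actually writes, using the paths from this log; the diff is the same one the restart path runs a few steps later, and kubeadm config validate is only available on recent kubeadm releases:

	  # Inspect the freshly generated config and compare it with the one already on the node
	  sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	  # Optionally let kubeadm sanity-check it (newer kubeadm versions)
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new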
	
	I1115 10:01:44.581962  630269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:01:44.592094  630269 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:01:44.592162  630269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:01:44.601958  630269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1115 10:01:44.616915  630269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:01:44.633820  630269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1115 10:01:44.649367  630269 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:01:44.654293  630269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:01:44.667589  630269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:44.778596  630269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:44.808420  630269 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113 for IP: 192.168.103.2
	I1115 10:01:44.808441  630269 certs.go:195] generating shared ca certs ...
	I1115 10:01:44.808458  630269 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:44.808625  630269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:01:44.808701  630269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:01:44.808721  630269 certs.go:257] generating profile certs ...
	I1115 10:01:44.808837  630269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/client.key
	I1115 10:01:44.808925  630269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/apiserver.key.93e7bed8
	I1115 10:01:44.808987  630269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/proxy-client.key
	I1115 10:01:44.809144  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 10:01:44.809191  630269 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 10:01:44.809207  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:01:44.809246  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:01:44.809281  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:01:44.809313  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 10:01:44.809370  630269 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:01:44.810083  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:01:44.831783  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:01:44.852658  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:01:44.873910  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:01:44.899604  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1115 10:01:44.921251  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:01:44.940567  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:01:44.959620  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/newest-cni-783113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:01:44.979553  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:01:44.999671  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:01:45.019836  630269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:01:45.040042  630269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:01:45.054101  630269 ssh_runner.go:195] Run: openssl version
	I1115 10:01:45.060561  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:01:45.069731  630269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:45.073828  630269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:45.073897  630269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:01:45.114642  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:01:45.123783  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:01:45.133086  630269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:01:45.137207  630269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:01:45.137270  630269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:01:45.175795  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:01:45.185027  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:01:45.194349  630269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:01:45.198233  630269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:01:45.198292  630269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:01:45.233957  630269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
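	Each CA bundle is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is how OpenSSL finds trust anchors via -CApath. A small sketch, assuming the same paths, to confirm the links resolve; the last command should succeed because the profile's apiserver cert is signed by minikubeCA:

	  # The hash printed here is the basename of the .0 symlink
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # Confirm the symlink exists and points back at the PEM
	  ls -l /etc/ssl/certs/b5213941.0
	  # Verify a cert issued by minikubeCA against the hashed directory
	  sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt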
	I1115 10:01:45.242873  630269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:01:45.247038  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1115 10:01:45.281334  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1115 10:01:45.325481  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1115 10:01:45.365221  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1115 10:01:45.401797  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1115 10:01:45.438910  630269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
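	The six openssl runs above all use -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that is how the restart path decides whether the existing control-plane certs can be reused. The same check as a loop, assuming the cert layout shown in the log:

	  # Exit status 0 = valid for at least another 24h, 1 = about to expire
	  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	           etcd/server etcd/healthcheck-client etcd/peer; do
	    sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	      && echo "$c: ok" || echo "$c: expires within 24h"
	  done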
	I1115 10:01:45.474313  630269 kubeadm.go:401] StartCluster: {Name:newest-cni-783113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-783113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:01:45.474443  630269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:01:45.474510  630269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:01:45.505308  630269 cri.go:89] found id: ""
	I1115 10:01:45.505380  630269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:01:45.516488  630269 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1115 10:01:45.516516  630269 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1115 10:01:45.516575  630269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1115 10:01:45.527652  630269 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1115 10:01:45.529219  630269 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-783113" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:45.530160  630269 kubeconfig.go:62] /home/jenkins/minikube-integration/21895-355485/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-783113" cluster setting kubeconfig missing "newest-cni-783113" context setting]
	I1115 10:01:45.531415  630269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:45.533838  630269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1115 10:01:45.544464  630269 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1115 10:01:45.544505  630269 kubeadm.go:602] duration metric: took 27.980331ms to restartPrimaryControlPlane
	I1115 10:01:45.544530  630269 kubeadm.go:403] duration metric: took 70.23014ms to StartCluster
	I1115 10:01:45.544548  630269 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:45.544625  630269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:01:45.547151  630269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:01:45.547516  630269 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:01:45.547665  630269 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:01:45.547761  630269 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-783113"
	I1115 10:01:45.547790  630269 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-783113"
	W1115 10:01:45.547802  630269 addons.go:248] addon storage-provisioner should already be in state true
	I1115 10:01:45.547832  630269 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:45.547841  630269 addons.go:70] Setting dashboard=true in profile "newest-cni-783113"
	I1115 10:01:45.547869  630269 addons.go:239] Setting addon dashboard=true in "newest-cni-783113"
	I1115 10:01:45.547732  630269 config.go:182] Loaded profile config "newest-cni-783113": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:01:45.547920  630269 addons.go:70] Setting default-storageclass=true in profile "newest-cni-783113"
	I1115 10:01:45.547942  630269 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-783113"
	W1115 10:01:45.547878  630269 addons.go:248] addon dashboard should already be in state true
	I1115 10:01:45.547983  630269 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:45.548268  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.548385  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.548487  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.552224  630269 out.go:179] * Verifying Kubernetes components...
	I1115 10:01:45.553726  630269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:01:45.574524  630269 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1115 10:01:45.574532  630269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:01:45.574958  630269 addons.go:239] Setting addon default-storageclass=true in "newest-cni-783113"
	W1115 10:01:45.574984  630269 addons.go:248] addon default-storageclass should already be in state true
	I1115 10:01:45.575014  630269 host.go:66] Checking if "newest-cni-783113" exists ...
	I1115 10:01:45.575500  630269 cli_runner.go:164] Run: docker container inspect newest-cni-783113 --format={{.State.Status}}
	I1115 10:01:45.579622  630269 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:45.579647  630269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:01:45.579705  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:45.581119  630269 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1115 10:01:45.582370  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1115 10:01:45.582405  630269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1115 10:01:45.582481  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:45.607283  630269 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:45.607307  630269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:01:45.607379  630269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-783113
	I1115 10:01:45.609766  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:45.613177  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:45.630051  630269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/newest-cni-783113/id_rsa Username:docker}
	I1115 10:01:45.677742  630269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:01:45.694351  630269 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:01:45.694440  630269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:01:45.716509  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:01:45.720566  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1115 10:01:45.720592  630269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1115 10:01:45.734188  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:01:45.747440  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1115 10:01:45.747469  630269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1115 10:01:45.769226  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1115 10:01:45.769261  630269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1115 10:01:45.790228  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1115 10:01:45.790257  630269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1115 10:01:45.806813  630269 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1115 10:01:45.806868  630269 retry.go:31] will retry after 181.641891ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1115 10:01:45.807318  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1115 10:01:45.807335  630269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1115 10:01:45.822370  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1115 10:01:45.822441  630269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1115 10:01:45.836838  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1115 10:01:45.836872  630269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1115 10:01:45.853850  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1115 10:01:45.853877  630269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1115 10:01:45.866994  630269 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:01:45.867019  630269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1115 10:01:45.879306  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1115 10:01:45.989739  630269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
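	The first storage-provisioner apply failed a moment earlier because the apiserver was not yet answering on localhost:8443 (connection refused while fetching the OpenAPI schema for validation), so the addon machinery retried after ~180ms and the line above re-applies with --force. A hedged sketch of the same retry pattern, using the kubeconfig and binary paths from this log:

	  # Keep retrying the apply until the local apiserver accepts it (short fixed backoff)
	  until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml; do
	    sleep 0.2
	  done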
	I1115 10:01:46.194770  630269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1115 10:01:42.033294  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	W1115 10:01:44.533606  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	I1115 10:01:47.496760  630269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.762526029s)
	I1115 10:01:47.895897  630269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.01653847s)
	I1115 10:01:47.897073  630269 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-783113 addons enable metrics-server
	
	I1115 10:01:48.013438  630269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.023657304s)
	I1115 10:01:48.013568  630269 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.818758679s)
	I1115 10:01:48.013603  630269 api_server.go:72] duration metric: took 2.466050065s to wait for apiserver process to appear ...
	I1115 10:01:48.013613  630269 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:01:48.013636  630269 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:48.015318  630269 out.go:179] * Enabled addons: default-storageclass, dashboard, storage-provisioner
	I1115 10:01:48.016446  630269 addons.go:515] duration metric: took 2.468793567s for enable addons: enabled=[default-storageclass dashboard storage-provisioner]
	I1115 10:01:48.018479  630269 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:01:48.018511  630269 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:01:48.514114  630269 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:48.519405  630269 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1115 10:01:48.519433  630269 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1115 10:01:49.014040  630269 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:01:49.019477  630269 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:01:49.020562  630269 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:49.020590  630269 api_server.go:131] duration metric: took 1.006968992s to wait for apiserver health ...
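	The two 500 responses above are the expected transient state right after an apiserver restart: every component reports ok except the rbac/bootstrap-roles (and briefly the bootstrap-system-priority-classes) post-start hooks, and the loop keeps polling until /healthz returns 200. A minimal sketch of the same probe with curl, assuming anonymous access to /healthz is still permitted by the default system:public-info-viewer binding; -k is needed because the serving cert is issued by the cluster CA:

	  # Poll until the apiserver reports healthy, then show the per-check breakdown
	  until curl -fsk https://192.168.103.2:8443/healthz >/dev/null; do sleep 0.5; done
	  curl -sk "https://192.168.103.2:8443/healthz?verbose" | tail -n 5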
	I1115 10:01:49.020602  630269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:49.023944  630269 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:49.023980  630269 system_pods.go:61] "coredns-66bc5c9577-87x7w" [3f2d84f5-7f97-4a19-b552-0575a9ceb536] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:49.023988  630269 system_pods.go:61] "etcd-newest-cni-783113" [2ea0aa42-7852-499c-8e8e-c5e1cfeb5707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1115 10:01:49.023994  630269 system_pods.go:61] "kindnet-zjdf2" [f7a3d406-4576-45ea-a09e-00df6579f9df] Running
	I1115 10:01:49.024000  630269 system_pods.go:61] "kube-apiserver-newest-cni-783113" [2313995d-c79b-4e18-8b97-3463f3d95a8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1115 10:01:49.024005  630269 system_pods.go:61] "kube-controller-manager-newest-cni-783113" [d3439ed1-3ef3-4865-9ff8-42c82ac3cfc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1115 10:01:49.024014  630269 system_pods.go:61] "kube-proxy-bqp7j" [19ca680a-9bd3-4943-842b-7ef042aa6e0e] Running
	I1115 10:01:49.024021  630269 system_pods.go:61] "kube-scheduler-newest-cni-783113" [8feea409-ed92-4a4d-8df7-39898903b818] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1115 10:01:49.024028  630269 system_pods.go:61] "storage-provisioner" [830eb5ed-8939-4ca1-a08d-440456d95a53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1115 10:01:49.024035  630269 system_pods.go:74] duration metric: took 3.425902ms to wait for pod list to return data ...
	I1115 10:01:49.024044  630269 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:49.026716  630269 default_sa.go:45] found service account: "default"
	I1115 10:01:49.026747  630269 default_sa.go:55] duration metric: took 2.68713ms for default service account to be created ...
	I1115 10:01:49.026763  630269 kubeadm.go:587] duration metric: took 3.479209382s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1115 10:01:49.026786  630269 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:49.029365  630269 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:49.029428  630269 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:49.029467  630269 node_conditions.go:105] duration metric: took 2.656322ms to run NodePressure ...
	I1115 10:01:49.029489  630269 start.go:242] waiting for startup goroutines ...
	I1115 10:01:49.029502  630269 start.go:247] waiting for cluster config update ...
	I1115 10:01:49.029517  630269 start.go:256] writing updated cluster config ...
	I1115 10:01:49.029853  630269 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:49.083359  630269 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:01:49.085448  630269 out.go:179] * Done! kubectl is now configured to use "newest-cni-783113" cluster and "default" namespace by default
	W1115 10:01:47.343136  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:49.843498  625726 pod_ready.go:104] pod "coredns-66bc5c9577-6gvgh" is not "Ready", error: <nil>
	W1115 10:01:47.033082  622837 node_ready.go:57] node "auto-034018" has "Ready":"False" status (will retry)
	I1115 10:01:49.032793  622837 node_ready.go:49] node "auto-034018" is "Ready"
	I1115 10:01:49.032825  622837 node_ready.go:38] duration metric: took 11.003439671s for node "auto-034018" to be "Ready" ...
	I1115 10:01:49.032844  622837 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:01:49.032907  622837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:01:49.045496  622837 api_server.go:72] duration metric: took 11.322361735s to wait for apiserver process to appear ...
	I1115 10:01:49.045528  622837 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:01:49.045553  622837 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1115 10:01:49.050595  622837 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1115 10:01:49.051757  622837 api_server.go:141] control plane version: v1.34.1
	I1115 10:01:49.051786  622837 api_server.go:131] duration metric: took 6.250572ms to wait for apiserver health ...
	I1115 10:01:49.051798  622837 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:01:49.055597  622837 system_pods.go:59] 8 kube-system pods found
	I1115 10:01:49.055642  622837 system_pods.go:61] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.055660  622837 system_pods.go:61] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.055668  622837 system_pods.go:61] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.055673  622837 system_pods.go:61] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.055682  622837 system_pods.go:61] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.055691  622837 system_pods.go:61] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.055696  622837 system_pods.go:61] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.055706  622837 system_pods.go:61] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.055717  622837 system_pods.go:74] duration metric: took 3.911203ms to wait for pod list to return data ...
	I1115 10:01:49.055731  622837 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:01:49.058224  622837 default_sa.go:45] found service account: "default"
	I1115 10:01:49.058245  622837 default_sa.go:55] duration metric: took 2.504349ms for default service account to be created ...
	I1115 10:01:49.058256  622837 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:01:49.061388  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:49.061441  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.061446  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.061456  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.061459  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.061470  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.061475  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.061479  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.061486  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.061528  622837 retry.go:31] will retry after 312.347036ms: missing components: kube-dns
	I1115 10:01:49.382127  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:49.382188  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.382197  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.382212  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.382226  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.382233  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.382242  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.382248  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.382259  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.382283  622837 retry.go:31] will retry after 383.502025ms: missing components: kube-dns
	I1115 10:01:49.771364  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:49.771442  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:01:49.771451  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:49.771460  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:49.771466  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:49.771475  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:49.771481  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:49.771486  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:49.771497  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:01:49.771530  622837 retry.go:31] will retry after 425.951583ms: missing components: kube-dns
	I1115 10:01:50.201588  622837 system_pods.go:86] 8 kube-system pods found
	I1115 10:01:50.201631  622837 system_pods.go:89] "coredns-66bc5c9577-gxsbr" [2791d34a-f12f-405e-bf11-ca857ff63259] Running
	I1115 10:01:50.201639  622837 system_pods.go:89] "etcd-auto-034018" [4287824b-56ef-4250-8b4d-a5cde713cad1] Running
	I1115 10:01:50.201645  622837 system_pods.go:89] "kindnet-jbw6d" [60746de5-a450-42ec-8dba-cccdc2536e86] Running
	I1115 10:01:50.201650  622837 system_pods.go:89] "kube-apiserver-auto-034018" [e921a2a2-a70a-45f8-b30d-3b803a856590] Running
	I1115 10:01:50.201655  622837 system_pods.go:89] "kube-controller-manager-auto-034018" [b1910171-7727-4a36-a9b3-5569e82cddd5] Running
	I1115 10:01:50.201662  622837 system_pods.go:89] "kube-proxy-9pmmv" [b8ad36bf-b68c-49ec-89ce-f1a27d8c6971] Running
	I1115 10:01:50.201669  622837 system_pods.go:89] "kube-scheduler-auto-034018" [aeec3869-903f-49f7-b392-7ea75c0e6fb9] Running
	I1115 10:01:50.201684  622837 system_pods.go:89] "storage-provisioner" [908d198a-7280-4d12-9019-cc8d4dc78821] Running
	I1115 10:01:50.201697  622837 system_pods.go:126] duration metric: took 1.143432108s to wait for k8s-apps to be running ...
	I1115 10:01:50.201711  622837 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:01:50.201765  622837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:01:50.215182  622837 system_svc.go:56] duration metric: took 13.458446ms WaitForService to wait for kubelet
	I1115 10:01:50.215216  622837 kubeadm.go:587] duration metric: took 12.492092029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:01:50.215238  622837 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:01:50.218199  622837 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:01:50.218226  622837 node_conditions.go:123] node cpu capacity is 8
	I1115 10:01:50.218239  622837 node_conditions.go:105] duration metric: took 2.997277ms to run NodePressure ...
	I1115 10:01:50.218253  622837 start.go:242] waiting for startup goroutines ...
	I1115 10:01:50.218264  622837 start.go:247] waiting for cluster config update ...
	I1115 10:01:50.218277  622837 start.go:256] writing updated cluster config ...
	I1115 10:01:50.218619  622837 ssh_runner.go:195] Run: rm -f paused
	I1115 10:01:50.222554  622837 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:01:50.226291  622837 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gxsbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.230113  622837 pod_ready.go:94] pod "coredns-66bc5c9577-gxsbr" is "Ready"
	I1115 10:01:50.230132  622837 pod_ready.go:86] duration metric: took 3.812766ms for pod "coredns-66bc5c9577-gxsbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.231926  622837 pod_ready.go:83] waiting for pod "etcd-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.235432  622837 pod_ready.go:94] pod "etcd-auto-034018" is "Ready"
	I1115 10:01:50.235453  622837 pod_ready.go:86] duration metric: took 3.506292ms for pod "etcd-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.237129  622837 pod_ready.go:83] waiting for pod "kube-apiserver-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.240800  622837 pod_ready.go:94] pod "kube-apiserver-auto-034018" is "Ready"
	I1115 10:01:50.240817  622837 pod_ready.go:86] duration metric: took 3.670017ms for pod "kube-apiserver-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.242595  622837 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.627274  622837 pod_ready.go:94] pod "kube-controller-manager-auto-034018" is "Ready"
	I1115 10:01:50.627308  622837 pod_ready.go:86] duration metric: took 384.693592ms for pod "kube-controller-manager-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:50.827881  622837 pod_ready.go:83] waiting for pod "kube-proxy-9pmmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.226782  622837 pod_ready.go:94] pod "kube-proxy-9pmmv" is "Ready"
	I1115 10:01:51.226810  622837 pod_ready.go:86] duration metric: took 398.903606ms for pod "kube-proxy-9pmmv" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.427366  622837 pod_ready.go:83] waiting for pod "kube-scheduler-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.826728  622837 pod_ready.go:94] pod "kube-scheduler-auto-034018" is "Ready"
	I1115 10:01:51.826767  622837 pod_ready.go:86] duration metric: took 399.351016ms for pod "kube-scheduler-auto-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:01:51.826796  622837 pod_ready.go:40] duration metric: took 1.604201385s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:01:51.878880  622837 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:01:51.880846  622837 out.go:179] * Done! kubectl is now configured to use "auto-034018" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.188741113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.192814405Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=90d9054b-1ee1-42df-8ee7-a29ce9c98d12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.193416308Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6fc7b2e6-1b4d-48cb-85c1-8deaf35da78e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.19447033Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.194913475Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.195198028Z" level=info msg="Ran pod sandbox 00e7796884f8e68d27c4039b92cbb9742d0b4bac7a43171fbe4ff26b58a8d621 with infra container: kube-system/kindnet-zjdf2/POD" id=90d9054b-1ee1-42df-8ee7-a29ce9c98d12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.195563471Z" level=info msg="Ran pod sandbox e2fcee80c030b50ce74aeaa547cda9536e7bab229d55de0fbe62e51639f20a5b with infra container: kube-system/kube-proxy-bqp7j/POD" id=6fc7b2e6-1b4d-48cb-85c1-8deaf35da78e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.19636922Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=db31bba0-1873-4135-a84d-4f1e0190840c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.196627374Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d704c136-cf5a-4563-bb0a-679d6a870486 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.197283729Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9d892a21-06ef-4383-9e70-6424064f18d5 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.197532302Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=17340a4a-d1b6-4121-8f22-10307a5cb7bd name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198431743Z" level=info msg="Creating container: kube-system/kindnet-zjdf2/kindnet-cni" id=1dbce062-e56d-40f9-99c0-9d568086cdc5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198520311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198431746Z" level=info msg="Creating container: kube-system/kube-proxy-bqp7j/kube-proxy" id=b2dc1601-ebea-4517-9366-8269f69c1fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.198668827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.205534575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.206026841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.207979604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.208458747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.235407253Z" level=info msg="Created container 177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792: kube-system/kindnet-zjdf2/kindnet-cni" id=1dbce062-e56d-40f9-99c0-9d568086cdc5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.235992478Z" level=info msg="Starting container: 177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792" id=555e2bde-8b1b-4c61-bd3d-d188bcd0d95a name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.237842741Z" level=info msg="Started container" PID=1044 containerID=177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792 description=kube-system/kindnet-zjdf2/kindnet-cni id=555e2bde-8b1b-4c61-bd3d-d188bcd0d95a name=/runtime.v1.RuntimeService/StartContainer sandboxID=00e7796884f8e68d27c4039b92cbb9742d0b4bac7a43171fbe4ff26b58a8d621
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.23878445Z" level=info msg="Created container 3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966: kube-system/kube-proxy-bqp7j/kube-proxy" id=b2dc1601-ebea-4517-9366-8269f69c1fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.239576716Z" level=info msg="Starting container: 3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966" id=b162d96b-00f3-4523-89c9-866cbd77bb79 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:01:48 newest-cni-783113 crio[524]: time="2025-11-15T10:01:48.242667686Z" level=info msg="Started container" PID=1045 containerID=3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966 description=kube-system/kube-proxy-bqp7j/kube-proxy id=b162d96b-00f3-4523-89c9-866cbd77bb79 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e2fcee80c030b50ce74aeaa547cda9536e7bab229d55de0fbe62e51639f20a5b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3ad1b9ceb1dbf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   e2fcee80c030b       kube-proxy-bqp7j                            kube-system
	177965edad35a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   00e7796884f8e       kindnet-zjdf2                               kube-system
	b347dba9b065d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   4572ad29702e1       kube-apiserver-newest-cni-783113            kube-system
	9409cc92c0e96       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   0d9b202b03e1d       kube-scheduler-newest-cni-783113            kube-system
	5f919a2e9786b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   206c3b2d011a7       etcd-newest-cni-783113                      kube-system
	85cc4b53b2889       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   e3661a25eb63a       kube-controller-manager-newest-cni-783113   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-783113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-783113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=newest-cni-783113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_01_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:01:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-783113
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 15 Nov 2025 10:01:47 +0000   Sat, 15 Nov 2025 10:01:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-783113
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                d180c89e-341a-4dbc-bc47-54c5b0042756
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-783113                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-zjdf2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-783113             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-783113    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-bqp7j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-783113             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           30s                node-controller  Node newest-cni-783113 event: Registered Node newest-cni-783113 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)   kubelet          Node newest-cni-783113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)   kubelet          Node newest-cni-783113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 10s)   kubelet          Node newest-cni-783113 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-783113 event: Registered Node newest-cni-783113 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [5f919a2e9786b1d58ad021f0e0907f1c99dc24c7a50298e330d71f4da52c9e03] <==
	{"level":"warn","ts":"2025-11-15T10:01:46.759730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.765939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.774869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.781107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.787520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.793797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.805990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.813428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.820675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.828531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.834821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.841832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.849044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.856269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.862435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.868762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.876202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.882770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.890539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.900250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.906783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.927962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.935298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.943104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:46.983580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41884","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:01:55 up  1:44,  0 user,  load average: 5.28, 3.33, 2.09
	Linux newest-cni-783113 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [177965edad35ab7bc4ac03ef33d5c8ac0548da2de0546df9d0b1167b6823c792] <==
	I1115 10:01:48.434063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:01:48.434333       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1115 10:01:48.434483       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:01:48.434504       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:01:48.434528       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:01:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:01:48.725274       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:01:48.725308       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:01:48.725322       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:01:48.725481       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:01:49.125526       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:01:49.125568       1 metrics.go:72] Registering metrics
	I1115 10:01:49.125682       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [b347dba9b065dbc9ab312f9e85bb5958e47274c599716dc75f0de2924b9e3277] <==
	I1115 10:01:47.471401       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:01:47.471912       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1115 10:01:47.471977       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1115 10:01:47.472079       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:01:47.472115       1 policy_source.go:240] refreshing policies
	I1115 10:01:47.472210       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:01:47.472289       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:01:47.472942       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:01:47.472957       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:01:47.478984       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:01:47.493863       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1115 10:01:47.511156       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:47.550249       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:01:47.785628       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:01:47.814511       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:01:47.833154       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:01:47.841649       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:01:47.848533       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:01:47.880991       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.242.104"}
	I1115 10:01:47.890959       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.111.99"}
	I1115 10:01:48.374532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:01:50.844958       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:01:51.193900       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:01:51.443892       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [85cc4b53b288933ecd9863c2e7cd92befe5f1dffe99dfce282a0efb376cc5e26] <==
	I1115 10:01:50.800378       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:01:50.804637       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1115 10:01:50.804705       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1115 10:01:50.807235       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:01:50.814548       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-783113"
	I1115 10:01:50.814692       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1115 10:01:50.840387       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1115 10:01:50.840480       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:01:50.840480       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:01:50.840654       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:01:50.840702       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 10:01:50.840927       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1115 10:01:50.841044       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:01:50.841058       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1115 10:01:50.841146       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:01:50.841254       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:01:50.842538       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:01:50.846213       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:50.846258       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:01:50.846377       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:01:50.849634       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:01:50.849642       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:50.854739       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1115 10:01:50.856998       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:01:50.857969       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [3ad1b9ceb1dbf75e014776fa482c1eba37c87d155fc5b52311a23c67ad452966] <==
	I1115 10:01:48.276180       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:01:48.346266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:01:48.446656       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:01:48.446716       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1115 10:01:48.446812       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:01:48.469772       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:01:48.469839       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:01:48.475797       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:01:48.476238       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:01:48.476268       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:48.479862       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:01:48.479882       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:01:48.479897       1 config.go:309] "Starting node config controller"
	I1115 10:01:48.479905       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:01:48.479910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:01:48.479912       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:01:48.479917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:01:48.479900       1 config.go:200] "Starting service config controller"
	I1115 10:01:48.479926       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:01:48.581038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:01:48.581838       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:01:48.581850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9409cc92c0e96c6895a87fb31f50ae5a740a26c9e4370bfc6e46f8f7dd07e7a7] <==
	I1115 10:01:46.788741       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:01:48.018903       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:01:48.018930       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:48.023202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:48.023202       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:01:48.023237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:48.023250       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:48.023263       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:48.023252       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:01:48.024079       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:01:48.024246       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:01:48.124411       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1115 10:01:48.124460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:48.124427       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:01:46 newest-cni-783113 kubelet[663]: E1115 10:01:46.924189     663 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-783113\" not found" node="newest-cni-783113"
	Nov 15 10:01:46 newest-cni-783113 kubelet[663]: E1115 10:01:46.924308     663 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-783113\" not found" node="newest-cni-783113"
	Nov 15 10:01:46 newest-cni-783113 kubelet[663]: E1115 10:01:46.924512     663 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-783113\" not found" node="newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.485784     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.498008     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-783113\" already exists" pod="kube-system/kube-controller-manager-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.498177     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.505825     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-783113\" already exists" pod="kube-system/kube-scheduler-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.505866     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.511599     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-783113\" already exists" pod="kube-system/etcd-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.511631     663 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.519266     663 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: E1115 10:01:47.519472     663 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-783113\" already exists" pod="kube-system/kube-apiserver-newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.519516     663 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-783113"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.519559     663 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.520497     663 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.880384     663 apiserver.go:52] "Watching apiserver"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.885587     663 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929644     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-xtables-lock\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929689     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-lib-modules\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929795     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19ca680a-9bd3-4943-842b-7ef042aa6e0e-xtables-lock\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929864     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ca680a-9bd3-4943-842b-7ef042aa6e0e-lib-modules\") pod \"kube-proxy-bqp7j\" (UID: \"19ca680a-9bd3-4943-842b-7ef042aa6e0e\") " pod="kube-system/kube-proxy-bqp7j"
	Nov 15 10:01:47 newest-cni-783113 kubelet[663]: I1115 10:01:47.929918     663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7a3d406-4576-45ea-a09e-00df6579f9df-cni-cfg\") pod \"kindnet-zjdf2\" (UID: \"f7a3d406-4576-45ea-a09e-00df6579f9df\") " pod="kube-system/kindnet-zjdf2"
	Nov 15 10:01:50 newest-cni-783113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:01:50 newest-cni-783113 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:01:50 newest-cni-783113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-783113 -n newest-cni-783113
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-783113 -n newest-cni-783113: exit status 2 (359.532963ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-783113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l: exit status 1 (77.528775ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-87x7w" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-9dhx4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-l6h4l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-783113 describe pod coredns-66bc5c9577-87x7w storage-provisioner dashboard-metrics-scraper-6ffb444bf9-9dhx4 kubernetes-dashboard-855c9754f9-l6h4l: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.28s)
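
The next failure, TestStartStop/group/embed-certs/serial/Pause, exits with GUEST_PAUSE because minikube's pause path runs `sudo runc list -f json` on the node and that call fails with "open /run/runc: no such file or directory" (see its stderr below); the newest-cni Pause above likely fails the same way. A minimal spot-check, assuming the docker driver and that the node container is still running; the container name and state-directory paths below are illustrative, and which directory actually exists depends on the OCI runtime CRI-O is configured with (runc vs. crun):

	# Which runtime state directories exist inside the node container?
	docker exec embed-certs-430513 ls -ld /run/runc /run/crun
	# Dump CRI-O's runtime status/config to see which OCI runtime it is using
	docker exec embed-certs-430513 crictl info
	# Reproduce the exact call minikube issues during pause
	docker exec embed-certs-430513 sudo runc list -f json
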

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-430513 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-430513 --alsologtostderr -v=1: exit status 80 (1.923203638s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-430513 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:02:26.319064  645466 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:26.319310  645466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:26.319318  645466 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:26.319323  645466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:26.319573  645466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:02:26.332019  645466 out.go:368] Setting JSON to false
	I1115 10:02:26.332079  645466 mustload.go:66] Loading cluster: embed-certs-430513
	I1115 10:02:26.332615  645466 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:26.333180  645466 cli_runner.go:164] Run: docker container inspect embed-certs-430513 --format={{.State.Status}}
	I1115 10:02:26.357203  645466 host.go:66] Checking if "embed-certs-430513" exists ...
	I1115 10:02:26.357569  645466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:26.420189  645466 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-11-15 10:02:26.410328955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:26.473654  645466 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-430513 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:02:26.620352  645466 out.go:179] * Pausing node embed-certs-430513 ... 
	I1115 10:02:26.662578  645466 host.go:66] Checking if "embed-certs-430513" exists ...
	I1115 10:02:26.662985  645466 ssh_runner.go:195] Run: systemctl --version
	I1115 10:02:26.663036  645466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-430513
	I1115 10:02:26.698141  645466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/embed-certs-430513/id_rsa Username:docker}
	I1115 10:02:26.804052  645466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:26.823477  645466 pause.go:52] kubelet running: true
	I1115 10:02:26.823556  645466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:02:26.988360  645466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:02:26.988503  645466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:02:27.077858  645466 cri.go:89] found id: "6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988"
	I1115 10:02:27.077886  645466 cri.go:89] found id: "cbdca37a79bed3f5268c2953f014f1f188b62728299fc5e94dc49e84105b8781"
	I1115 10:02:27.077893  645466 cri.go:89] found id: "23f964d7a59fbb004c1029367966897a137f95d25128e0a59e80531fb4a8877e"
	I1115 10:02:27.077898  645466 cri.go:89] found id: "ce8bf7d712ce411f45bad7e0da6cda07264c3abd84c422e256c119681f884ced"
	I1115 10:02:27.077903  645466 cri.go:89] found id: "4caca61ffa84fa2ac3d3a7a94231508ee36b8ef8047706a8c4c1af15b3e8e74f"
	I1115 10:02:27.077907  645466 cri.go:89] found id: "7884e9381d1df9759c7a3893af1cf75c8acb92edff2489e9e07e1d1d4102b7df"
	I1115 10:02:27.077912  645466 cri.go:89] found id: "5fecf1854c34c29514b1ec6c6221755aeaa0b46dbd1e7d27edaf9fa5c71f7871"
	I1115 10:02:27.077917  645466 cri.go:89] found id: "edbf223b01e791d146a5f2ad465d24c0a6d60f196e80f447883f5851e9f2a5af"
	I1115 10:02:27.077921  645466 cri.go:89] found id: "aa074b22936792966ead83faadae096faa591efe77ef77f4c0e0ec3344f4e2e9"
	I1115 10:02:27.077943  645466 cri.go:89] found id: "50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969"
	I1115 10:02:27.077952  645466 cri.go:89] found id: "e9065e2f0ff841f373976332fe044a958d99160f1ca62ef99f93d1a22174fdeb"
	I1115 10:02:27.077956  645466 cri.go:89] found id: ""
	I1115 10:02:27.078006  645466 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:02:27.091025  645466 retry.go:31] will retry after 289.677282ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:02:27Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:02:27.381346  645466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:27.394691  645466 pause.go:52] kubelet running: false
	I1115 10:02:27.394762  645466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:02:27.552844  645466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:02:27.552920  645466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:02:27.622845  645466 cri.go:89] found id: "6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988"
	I1115 10:02:27.622874  645466 cri.go:89] found id: "cbdca37a79bed3f5268c2953f014f1f188b62728299fc5e94dc49e84105b8781"
	I1115 10:02:27.622880  645466 cri.go:89] found id: "23f964d7a59fbb004c1029367966897a137f95d25128e0a59e80531fb4a8877e"
	I1115 10:02:27.622885  645466 cri.go:89] found id: "ce8bf7d712ce411f45bad7e0da6cda07264c3abd84c422e256c119681f884ced"
	I1115 10:02:27.622889  645466 cri.go:89] found id: "4caca61ffa84fa2ac3d3a7a94231508ee36b8ef8047706a8c4c1af15b3e8e74f"
	I1115 10:02:27.622894  645466 cri.go:89] found id: "7884e9381d1df9759c7a3893af1cf75c8acb92edff2489e9e07e1d1d4102b7df"
	I1115 10:02:27.622898  645466 cri.go:89] found id: "5fecf1854c34c29514b1ec6c6221755aeaa0b46dbd1e7d27edaf9fa5c71f7871"
	I1115 10:02:27.622902  645466 cri.go:89] found id: "edbf223b01e791d146a5f2ad465d24c0a6d60f196e80f447883f5851e9f2a5af"
	I1115 10:02:27.622907  645466 cri.go:89] found id: "aa074b22936792966ead83faadae096faa591efe77ef77f4c0e0ec3344f4e2e9"
	I1115 10:02:27.622929  645466 cri.go:89] found id: "50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969"
	I1115 10:02:27.622937  645466 cri.go:89] found id: "e9065e2f0ff841f373976332fe044a958d99160f1ca62ef99f93d1a22174fdeb"
	I1115 10:02:27.622941  645466 cri.go:89] found id: ""
	I1115 10:02:27.622990  645466 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:02:27.635157  645466 retry.go:31] will retry after 226.241996ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:02:27Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:02:27.861834  645466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:27.882529  645466 pause.go:52] kubelet running: false
	I1115 10:02:27.882716  645466 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:02:28.070911  645466 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:02:28.071000  645466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:02:28.147844  645466 cri.go:89] found id: "6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988"
	I1115 10:02:28.147876  645466 cri.go:89] found id: "cbdca37a79bed3f5268c2953f014f1f188b62728299fc5e94dc49e84105b8781"
	I1115 10:02:28.147883  645466 cri.go:89] found id: "23f964d7a59fbb004c1029367966897a137f95d25128e0a59e80531fb4a8877e"
	I1115 10:02:28.147887  645466 cri.go:89] found id: "ce8bf7d712ce411f45bad7e0da6cda07264c3abd84c422e256c119681f884ced"
	I1115 10:02:28.147891  645466 cri.go:89] found id: "4caca61ffa84fa2ac3d3a7a94231508ee36b8ef8047706a8c4c1af15b3e8e74f"
	I1115 10:02:28.147895  645466 cri.go:89] found id: "7884e9381d1df9759c7a3893af1cf75c8acb92edff2489e9e07e1d1d4102b7df"
	I1115 10:02:28.147899  645466 cri.go:89] found id: "5fecf1854c34c29514b1ec6c6221755aeaa0b46dbd1e7d27edaf9fa5c71f7871"
	I1115 10:02:28.147903  645466 cri.go:89] found id: "edbf223b01e791d146a5f2ad465d24c0a6d60f196e80f447883f5851e9f2a5af"
	I1115 10:02:28.147907  645466 cri.go:89] found id: "aa074b22936792966ead83faadae096faa591efe77ef77f4c0e0ec3344f4e2e9"
	I1115 10:02:28.147923  645466 cri.go:89] found id: "50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969"
	I1115 10:02:28.147928  645466 cri.go:89] found id: "e9065e2f0ff841f373976332fe044a958d99160f1ca62ef99f93d1a22174fdeb"
	I1115 10:02:28.147933  645466 cri.go:89] found id: ""
	I1115 10:02:28.147981  645466 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:02:28.167298  645466 out.go:203] 
	W1115 10:02:28.168681  645466 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:02:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:02:28.168704  645466 out.go:285] * 
	W1115 10:02:28.174175  645466 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:02:28.175456  645466 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-430513 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-430513
helpers_test.go:243: (dbg) docker inspect embed-certs-430513:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307",
	        "Created": "2025-11-15T10:00:21.0128724Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 626068,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:01:25.448663378Z",
	            "FinishedAt": "2025-11-15T10:01:24.380209125Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307-json.log",
	        "Name": "/embed-certs-430513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-430513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-430513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307",
	                "LowerDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-430513",
	                "Source": "/var/lib/docker/volumes/embed-certs-430513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-430513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-430513",
	                "name.minikube.sigs.k8s.io": "embed-certs-430513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cfe8adb7aca3e307ebf87e35fe3034216bb9a48a3f6c02b3637dc26344d5ffa9",
	            "SandboxKey": "/var/run/docker/netns/cfe8adb7aca3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-430513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5a35f2144e5ffd9ac7511406e9418188a3c5784e35110b679aaeaa5b02f5ee9",
	                    "EndpointID": "5b65da5414305a2d45dd0df0ad496187f3de55f236f0ed465d498f822f0164c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "26:9a:3d:ac:b8:fc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-430513",
	                        "0d1528353148"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513: exit status 2 (376.180499ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-430513 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-430513 logs -n 25: (1.210126209s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-034018 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat docker --no-pager                                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/docker/daemon.json                                                                                        │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo docker system info                                                                                                 │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cri-dockerd --version                                                                                              │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat containerd --no-pager                                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/containerd/config.toml                                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo containerd config dump                                                                                             │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl cat crio --no-pager                                                                                      │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo crio config                                                                                                        │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ delete  │ -p auto-034018                                                                                                                         │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ start   │ -p calico-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-034018      │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ image   │ embed-certs-430513 image list --format=json                                                                                            │ embed-certs-430513 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ pause   │ -p embed-certs-430513 --alsologtostderr -v=1                                                                                           │ embed-certs-430513 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:02:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:02:22.572627  644840 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:22.572892  644840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:22.572903  644840 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:22.572907  644840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:22.573104  644840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:02:22.573610  644840 out.go:368] Setting JSON to false
	I1115 10:02:22.574937  644840 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6284,"bootTime":1763194659,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:02:22.575029  644840 start.go:143] virtualization: kvm guest
	I1115 10:02:22.577244  644840 out.go:179] * [calico-034018] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:02:22.578510  644840 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:02:22.578551  644840 notify.go:221] Checking for updates...
	I1115 10:02:22.580947  644840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:02:22.582252  644840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:22.583546  644840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:02:22.584811  644840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:02:22.586184  644840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:02:22.588005  644840 config.go:182] Loaded profile config "default-k8s-diff-port-679865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:22.588113  644840 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:22.588225  644840 config.go:182] Loaded profile config "kindnet-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:22.588346  644840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:02:22.613234  644840 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:02:22.613340  644840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:22.675328  644840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:02:22.663748723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:22.675534  644840 docker.go:319] overlay module found
	I1115 10:02:22.677472  644840 out.go:179] * Using the docker driver based on user configuration
	I1115 10:02:22.678650  644840 start.go:309] selected driver: docker
	I1115 10:02:22.678667  644840 start.go:930] validating driver "docker" against <nil>
	I1115 10:02:22.678679  644840 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:02:22.679261  644840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:22.745630  644840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:02:22.736035397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:22.745779  644840 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:02:22.745973  644840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:02:22.747887  644840 out.go:179] * Using Docker driver with root privileges
	I1115 10:02:22.749150  644840 cni.go:84] Creating CNI manager for "calico"
	I1115 10:02:22.749176  644840 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1115 10:02:22.749291  644840 start.go:353] cluster config:
	{Name:calico-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:22.750764  644840 out.go:179] * Starting "calico-034018" primary control-plane node in "calico-034018" cluster
	I1115 10:02:22.751940  644840 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:02:22.753095  644840 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:02:22.754307  644840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:22.754345  644840 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:02:22.754359  644840 cache.go:65] Caching tarball of preloaded images
	I1115 10:02:22.754383  644840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:02:22.754483  644840 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:02:22.754498  644840 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:02:22.754592  644840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/config.json ...
	I1115 10:02:22.754613  644840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/config.json: {Name:mk1e647214f00a9b9d4fa1d08f640554ac317c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:22.776136  644840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:02:22.776169  644840 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:02:22.776192  644840 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:02:22.776223  644840 start.go:360] acquireMachinesLock for calico-034018: {Name:mk2832fa6a8a4c61196c221c11c833ad8a48bbe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:02:22.776343  644840 start.go:364] duration metric: took 98.254µs to acquireMachinesLock for "calico-034018"
	I1115 10:02:22.776373  644840 start.go:93] Provisioning new machine with config: &{Name:calico-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034018 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:22.776488  644840 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:02:21.770687  636459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:02:21.775953  636459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:02:21.775973  636459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:02:21.790080  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:02:22.063883  636459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:02:22.064056  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-034018 minikube.k8s.io/updated_at=2025_11_15T10_02_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=kindnet-034018 minikube.k8s.io/primary=true
	I1115 10:02:22.064186  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:22.076061  636459 ops.go:34] apiserver oom_adj: -16
	I1115 10:02:22.156857  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:22.657841  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:23.157877  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:23.657099  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:02:22.214550  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	W1115 10:02:24.216479  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	I1115 10:02:24.157608  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:24.657831  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:25.157800  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:25.657561  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:26.157480  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:26.268041  636459 kubeadm.go:1114] duration metric: took 4.204127028s to wait for elevateKubeSystemPrivileges
	I1115 10:02:26.268077  636459 kubeadm.go:403] duration metric: took 17.769934383s to StartCluster
	I1115 10:02:26.268101  636459 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:26.268182  636459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:26.270069  636459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:26.332107  636459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:02:26.332130  636459 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:26.332210  636459 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:02:26.332312  636459 addons.go:70] Setting storage-provisioner=true in profile "kindnet-034018"
	I1115 10:02:26.332325  636459 config.go:182] Loaded profile config "kindnet-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:26.332339  636459 addons.go:70] Setting default-storageclass=true in profile "kindnet-034018"
	I1115 10:02:26.332365  636459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-034018"
	I1115 10:02:26.332333  636459 addons.go:239] Setting addon storage-provisioner=true in "kindnet-034018"
	I1115 10:02:26.332434  636459 host.go:66] Checking if "kindnet-034018" exists ...
	I1115 10:02:26.332800  636459 cli_runner.go:164] Run: docker container inspect kindnet-034018 --format={{.State.Status}}
	I1115 10:02:26.332978  636459 cli_runner.go:164] Run: docker container inspect kindnet-034018 --format={{.State.Status}}
	I1115 10:02:26.474821  636459 addons.go:239] Setting addon default-storageclass=true in "kindnet-034018"
	I1115 10:02:26.474862  636459 host.go:66] Checking if "kindnet-034018" exists ...
	I1115 10:02:26.475175  636459 cli_runner.go:164] Run: docker container inspect kindnet-034018 --format={{.State.Status}}
	I1115 10:02:26.493787  636459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:26.493809  636459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:02:26.493863  636459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034018
	I1115 10:02:26.511654  636459 out.go:179] * Verifying Kubernetes components...
	I1115 10:02:26.512838  636459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/kindnet-034018/id_rsa Username:docker}
	I1115 10:02:26.620352  636459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:02:22.778574  644840 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:02:22.778887  644840 start.go:159] libmachine.API.Create for "calico-034018" (driver="docker")
	I1115 10:02:22.778927  644840 client.go:173] LocalClient.Create starting
	I1115 10:02:22.779005  644840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:02:22.779046  644840 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:22.779076  644840 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:22.779136  644840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:02:22.779161  644840 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:22.779171  644840 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:22.779544  644840 cli_runner.go:164] Run: docker network inspect calico-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:02:22.797207  644840 cli_runner.go:211] docker network inspect calico-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:02:22.797285  644840 network_create.go:284] running [docker network inspect calico-034018] to gather additional debugging logs...
	I1115 10:02:22.797305  644840 cli_runner.go:164] Run: docker network inspect calico-034018
	W1115 10:02:22.814729  644840 cli_runner.go:211] docker network inspect calico-034018 returned with exit code 1
	I1115 10:02:22.814777  644840 network_create.go:287] error running [docker network inspect calico-034018]: docker network inspect calico-034018: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-034018 not found
	I1115 10:02:22.814802  644840 network_create.go:289] output of [docker network inspect calico-034018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-034018 not found
	
	** /stderr **
	I1115 10:02:22.814977  644840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:02:22.833874  644840 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:02:22.834954  644840 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:02:22.835596  644840 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:02:22.836326  644840 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b5a35f2144e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:aa:c4:ce:f8:c4} reservation:<nil>}
	I1115 10:02:22.837128  644840 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0a7ab291fd7d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c6:62:69:38:b2:19} reservation:<nil>}
	I1115 10:02:22.838139  644840 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e74230}
	I1115 10:02:22.838161  644840 network_create.go:124] attempt to create docker network calico-034018 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1115 10:02:22.838217  644840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-034018 calico-034018
	I1115 10:02:22.890041  644840 network_create.go:108] docker network calico-034018 192.168.94.0/24 created
	I1115 10:02:22.890082  644840 kic.go:121] calculated static IP "192.168.94.2" for the "calico-034018" container
	I1115 10:02:22.890137  644840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:02:22.910782  644840 cli_runner.go:164] Run: docker volume create calico-034018 --label name.minikube.sigs.k8s.io=calico-034018 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:02:22.929792  644840 oci.go:103] Successfully created a docker volume calico-034018
	I1115 10:02:22.929867  644840 cli_runner.go:164] Run: docker run --rm --name calico-034018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-034018 --entrypoint /usr/bin/test -v calico-034018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:02:23.330138  644840 oci.go:107] Successfully prepared a docker volume calico-034018
	I1115 10:02:23.330221  644840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:23.330234  644840 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:02:23.330309  644840 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-034018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:02:26.639863  636459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:26.654327  636459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:02:26.662933  636459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:26.662964  636459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:02:26.663023  636459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034018
	I1115 10:02:26.696472  636459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/kindnet-034018/id_rsa Username:docker}
	I1115 10:02:26.715840  636459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:26.786240  636459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:02:26.810500  636459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:27.318607  636459 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1115 10:02:27.320161  636459 node_ready.go:35] waiting up to 15m0s for node "kindnet-034018" to be "Ready" ...
	I1115 10:02:27.826310  636459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-034018" context rescaled to 1 replicas
	I1115 10:02:27.881017  636459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070473716s)
	I1115 10:02:27.885926  636459 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	
	
	==> CRI-O <==
	Nov 15 10:01:46 embed-certs-430513 crio[560]: time="2025-11-15T10:01:46.139663545Z" level=info msg="Started container" PID=1730 containerID=8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper id=a2ca6aaa-1095-463a-847f-e312a09147d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d0206500793c2d8e8f28509b1b6d3bd8f0d01d3daed8750e287409c686303f34
	Nov 15 10:01:47 embed-certs-430513 crio[560]: time="2025-11-15T10:01:47.107781326Z" level=info msg="Removing container: dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4" id=b9bcbdaa-3a4d-4bc1-8ffc-573686ed0531 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:01:47 embed-certs-430513 crio[560]: time="2025-11-15T10:01:47.12017726Z" level=info msg="Removed container dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=b9bcbdaa-3a4d-4bc1-8ffc-573686ed0531 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.155554482Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b874dbd0-66d1-4937-ac48-96296ec5d380 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.15663803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d3568128-ea7d-49a1-b7ea-20b119e57608 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.157812332Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=82462e66-8ce3-400d-9fc2-9db9ff23dda2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.157961572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.163164264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.163410492Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c19e9f101f54c66de24565108bc11bbcb8f9c2157dace7df66bc3e6adfb898a6/merged/etc/passwd: no such file or directory"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.163443693Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c19e9f101f54c66de24565108bc11bbcb8f9c2157dace7df66bc3e6adfb898a6/merged/etc/group: no such file or directory"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.16375245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.210779458Z" level=info msg="Created container 6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988: kube-system/storage-provisioner/storage-provisioner" id=82462e66-8ce3-400d-9fc2-9db9ff23dda2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.215768818Z" level=info msg="Starting container: 6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988" id=a935aa4f-8af1-454a-bccd-87c1a8162cdf name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.220127904Z" level=info msg="Started container" PID=1744 containerID=6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988 description=kube-system/storage-provisioner/storage-provisioner id=a935aa4f-8af1-454a-bccd-87c1a8162cdf name=/runtime.v1.RuntimeService/StartContainer sandboxID=543ccd7b83bbeec5e2a2f284ae6cfa0b11b3a3fbfe7af73340ed5def616b84d1
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.023625267Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b8dacf5-1dc1-4931-b653-0bb5b722b2ae name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.024758179Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=46d36edb-7e8b-4f9f-a9ca-03d94750b52c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.02585344Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=86a12d70-3733-4ff9-9917-de45cca2c7b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.025996893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.032926747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.033508252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.071947271Z" level=info msg="Created container 50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=86a12d70-3733-4ff9-9917-de45cca2c7b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.072669322Z" level=info msg="Starting container: 50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969" id=2c46af62-6b19-4394-b93c-a345b46d39c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.075132363Z" level=info msg="Started container" PID=1758 containerID=50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper id=2c46af62-6b19-4394-b93c-a345b46d39c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d0206500793c2d8e8f28509b1b6d3bd8f0d01d3daed8750e287409c686303f34
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.17169125Z" level=info msg="Removing container: 8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e" id=167acb3f-4b3a-4635-aa87-bd36b1cad514 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.183606459Z" level=info msg="Removed container 8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=167acb3f-4b3a-4635-aa87-bd36b1cad514 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	50a34c1d55c4e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   d0206500793c2       dashboard-metrics-scraper-6ffb444bf9-4msxv   kubernetes-dashboard
	6a9e72cff7916       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   543ccd7b83bbe       storage-provisioner                          kube-system
	e9065e2f0ff84       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   94e4a51da5d58       kubernetes-dashboard-855c9754f9-9dvs6        kubernetes-dashboard
	cbdca37a79bed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   2e075579da3a5       coredns-66bc5c9577-6gvgh                     kube-system
	f1ce6f672a761       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   454f05d4d0f83       busybox                                      default
	23f964d7a59fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   543ccd7b83bbe       storage-provisioner                          kube-system
	ce8bf7d712ce4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   d7ad572a4c7fe       kindnet-h26k6                                kube-system
	4caca61ffa84f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   9c248c6de7c0d       kube-proxy-kd7wd                             kube-system
	7884e9381d1df       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   daafac7c33a1d       kube-controller-manager-embed-certs-430513   kube-system
	5fecf1854c34c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   12b4ab1ba2337       kube-scheduler-embed-certs-430513            kube-system
	edbf223b01e79       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   459a6ad3053d9       kube-apiserver-embed-certs-430513            kube-system
	aa074b2293679       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   2c4d45bf95ce8       etcd-embed-certs-430513                      kube-system
	
	
	==> coredns [cbdca37a79bed3f5268c2953f014f1f188b62728299fc5e94dc49e84105b8781] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43410 - 37722 "HINFO IN 2948087706110210287.8991388019833740165. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.150234474s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-430513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-430513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=embed-certs-430513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_00_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:00:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-430513
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:02:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-430513
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                5e71a89d-4318-4931-9ea5-663742f9579f
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-6gvgh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-430513                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-h26k6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-430513             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-430513    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-kd7wd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-430513             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4msxv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9dvs6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node embed-certs-430513 event: Registered Node embed-certs-430513 in Controller
	  Normal  NodeReady                97s                  kubelet          Node embed-certs-430513 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node embed-certs-430513 event: Registered Node embed-certs-430513 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [aa074b22936792966ead83faadae096faa591efe77ef77f4c0e0ec3344f4e2e9] <==
	{"level":"warn","ts":"2025-11-15T10:01:33.881457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.899882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.907856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.919599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.926236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.934044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.941566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.949869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.960074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.967607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.975166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.981444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.990056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.996974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.007207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.015295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.022153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.029743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.045520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.054477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.066916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.075558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.084742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.139455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33858","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:02:25.816975Z","caller":"traceutil/trace.go:172","msg":"trace[1761666011] transaction","detail":"{read_only:false; response_revision:673; number_of_response:1; }","duration":"111.869324ms","start":"2025-11-15T10:02:25.705086Z","end":"2025-11-15T10:02:25.816956Z","steps":["trace[1761666011] 'process raft request'  (duration: 111.746788ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:02:29 up  1:44,  0 user,  load average: 5.87, 3.66, 2.24
	Linux embed-certs-430513 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce8bf7d712ce411f45bad7e0da6cda07264c3abd84c422e256c119681f884ced] <==
	I1115 10:01:35.529112       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:01:35.529338       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:01:35.529534       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:01:35.529552       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:01:35.529569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:01:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:01:35.828933       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:01:35.830071       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:01:35.922929       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:01:35.923647       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:01:36.124732       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:01:36.124767       1 metrics.go:72] Registering metrics
	I1115 10:01:36.124835       1 controller.go:711] "Syncing nftables rules"
	I1115 10:01:45.828545       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:01:45.828639       1 main.go:301] handling current node
	I1115 10:01:55.832540       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:01:55.832591       1 main.go:301] handling current node
	I1115 10:02:05.828582       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:02:05.828611       1 main.go:301] handling current node
	I1115 10:02:15.830509       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:02:15.830561       1 main.go:301] handling current node
	I1115 10:02:25.837347       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:02:25.837384       1 main.go:301] handling current node
	
	
	==> kube-apiserver [edbf223b01e791d146a5f2ad465d24c0a6d60f196e80f447883f5851e9f2a5af] <==
	I1115 10:01:34.720286       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:01:34.720520       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:01:34.720520       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:01:34.721297       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:01:34.721322       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:01:34.721892       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:01:34.718489       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:01:34.726001       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:01:34.726070       1 policy_source.go:240] refreshing policies
	I1115 10:01:34.727088       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 10:01:34.729766       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:01:34.775333       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:34.778321       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:01:35.087824       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:01:35.100455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:01:35.128428       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:01:35.151429       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:01:35.160024       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:01:35.215316       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.38.186"}
	I1115 10:01:35.228748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.33.61"}
	I1115 10:01:35.622281       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:01:38.353258       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:01:38.353312       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:01:38.454219       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:01:38.504468       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7884e9381d1df9759c7a3893af1cf75c8acb92edff2489e9e07e1d1d4102b7df] <==
	I1115 10:01:38.033426       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:01:38.035659       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:01:38.051952       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:01:38.051980       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:01:38.051980       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:01:38.052002       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:01:38.052000       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:01:38.052012       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:01:38.053621       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:01:38.053711       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:01:38.055287       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:01:38.056427       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:01:38.056757       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:01:38.057582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:01:38.057608       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:01:38.057695       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:01:38.059821       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:01:38.061049       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:01:38.061165       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:38.065420       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:01:38.065440       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:01:38.065447       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:01:38.068073       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:01:38.077404       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:01:38.087631       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-proxy [4caca61ffa84fa2ac3d3a7a94231508ee36b8ef8047706a8c4c1af15b3e8e74f] <==
	I1115 10:01:35.424877       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:01:35.495535       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:01:35.596037       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:01:35.596125       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:01:35.596257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:01:35.617344       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:01:35.617416       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:01:35.624530       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:01:35.625416       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:01:35.625433       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:35.627197       1 config.go:200] "Starting service config controller"
	I1115 10:01:35.627228       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:01:35.627423       1 config.go:309] "Starting node config controller"
	I1115 10:01:35.627729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:01:35.627801       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:01:35.628089       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:01:35.628163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:01:35.628313       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:01:35.628342       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:01:35.727905       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:01:35.728300       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:01:35.728877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5fecf1854c34c29514b1ec6c6221755aeaa0b46dbd1e7d27edaf9fa5c71f7871] <==
	I1115 10:01:33.596799       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:01:34.757920       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:01:34.758018       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:34.763483       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:01:34.763606       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:34.763660       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:34.763684       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:34.763719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:34.763567       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:01:34.765934       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:01:34.765994       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:01:34.864496       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:34.864524       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:34.869806       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722221     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0969e69a-a9ba-4971-9bdb-640845c9f45d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9dvs6\" (UID: \"0969e69a-a9ba-4971-9bdb-640845c9f45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dvs6"
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722295     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gqv\" (UniqueName: \"kubernetes.io/projected/0969e69a-a9ba-4971-9bdb-640845c9f45d-kube-api-access-x5gqv\") pod \"kubernetes-dashboard-855c9754f9-9dvs6\" (UID: \"0969e69a-a9ba-4971-9bdb-640845c9f45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dvs6"
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722323     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d91a212a-9dfd-4045-8cc8-e448d6c84ff8-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4msxv\" (UID: \"d91a212a-9dfd-4045-8cc8-e448d6c84ff8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv"
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722375     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbjcn\" (UniqueName: \"kubernetes.io/projected/d91a212a-9dfd-4045-8cc8-e448d6c84ff8-kube-api-access-sbjcn\") pod \"dashboard-metrics-scraper-6ffb444bf9-4msxv\" (UID: \"d91a212a-9dfd-4045-8cc8-e448d6c84ff8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv"
	Nov 15 10:01:42 embed-certs-430513 kubelet[719]: I1115 10:01:42.568101     719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:01:43 embed-certs-430513 kubelet[719]: I1115 10:01:43.102319     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dvs6" podStartSLOduration=1.404163342 podStartE2EDuration="5.102293887s" podCreationTimestamp="2025-11-15 10:01:38 +0000 UTC" firstStartedPulling="2025-11-15 10:01:38.899035025 +0000 UTC m=+6.969032327" lastFinishedPulling="2025-11-15 10:01:42.597165562 +0000 UTC m=+10.667162872" observedRunningTime="2025-11-15 10:01:43.101692169 +0000 UTC m=+11.171689508" watchObservedRunningTime="2025-11-15 10:01:43.102293887 +0000 UTC m=+11.172291204"
	Nov 15 10:01:46 embed-certs-430513 kubelet[719]: I1115 10:01:46.095101     719 scope.go:117] "RemoveContainer" containerID="dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4"
	Nov 15 10:01:47 embed-certs-430513 kubelet[719]: I1115 10:01:47.102977     719 scope.go:117] "RemoveContainer" containerID="dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4"
	Nov 15 10:01:47 embed-certs-430513 kubelet[719]: I1115 10:01:47.103257     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:01:47 embed-certs-430513 kubelet[719]: E1115 10:01:47.104632     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:01:48 embed-certs-430513 kubelet[719]: I1115 10:01:48.107025     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:01:48 embed-certs-430513 kubelet[719]: E1115 10:01:48.107230     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:01:56 embed-certs-430513 kubelet[719]: I1115 10:01:56.021139     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:01:56 embed-certs-430513 kubelet[719]: E1115 10:01:56.021448     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:02:06 embed-certs-430513 kubelet[719]: I1115 10:02:06.155012     719 scope.go:117] "RemoveContainer" containerID="23f964d7a59fbb004c1029367966897a137f95d25128e0a59e80531fb4a8877e"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: I1115 10:02:09.023074     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: I1115 10:02:09.169929     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: I1115 10:02:09.170288     719 scope.go:117] "RemoveContainer" containerID="50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: E1115 10:02:09.170513     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:02:16 embed-certs-430513 kubelet[719]: I1115 10:02:16.021250     719 scope.go:117] "RemoveContainer" containerID="50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969"
	Nov 15 10:02:16 embed-certs-430513 kubelet[719]: E1115 10:02:16.022364     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: kubelet.service: Consumed 1.801s CPU time.
	
	
	==> kubernetes-dashboard [e9065e2f0ff841f373976332fe044a958d99160f1ca62ef99f93d1a22174fdeb] <==
	2025/11/15 10:01:42 Using namespace: kubernetes-dashboard
	2025/11/15 10:01:42 Using in-cluster config to connect to apiserver
	2025/11/15 10:01:42 Using secret token for csrf signing
	2025/11/15 10:01:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:01:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:01:42 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:01:42 Generating JWE encryption key
	2025/11/15 10:01:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:01:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:01:43 Initializing JWE encryption key from synchronized object
	2025/11/15 10:01:43 Creating in-cluster Sidecar client
	2025/11/15 10:01:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:01:43 Serving insecurely on HTTP port: 9090
	2025/11/15 10:02:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:01:42 Starting overwatch
	
	
	==> storage-provisioner [23f964d7a59fbb004c1029367966897a137f95d25128e0a59e80531fb4a8877e] <==
	I1115 10:01:35.397646       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:02:05.399968       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988] <==
	I1115 10:02:06.243866       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:02:06.280546       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:02:06.280612       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:02:06.288541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:09.745648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:14.006923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:17.607369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:20.660919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:23.684201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:23.689369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:23.689560       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:02:23.689747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-430513_cc39e367-3afd-4738-bb9b-fc1a6cc09f16!
	I1115 10:02:23.689697       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd082bdf-d760-43e6-b6b6-335a4fbc7891", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-430513_cc39e367-3afd-4738-bb9b-fc1a6cc09f16 became leader
	W1115 10:02:23.693178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:23.698130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:23.790091       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-430513_cc39e367-3afd-4738-bb9b-fc1a6cc09f16!
	W1115 10:02:25.701939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:25.817989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:27.822905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:27.829101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-430513 -n embed-certs-430513
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-430513 -n embed-certs-430513: exit status 2 (353.684707ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-430513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-430513
helpers_test.go:243: (dbg) docker inspect embed-certs-430513:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307",
	        "Created": "2025-11-15T10:00:21.0128724Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 626068,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:01:25.448663378Z",
	            "FinishedAt": "2025-11-15T10:01:24.380209125Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307/0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307-json.log",
	        "Name": "/embed-certs-430513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-430513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-430513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1528353148448e5cb0ff5642c33b2975aef36967f2cbbbe7f5d58e373ab307",
	                "LowerDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/076ef13396d6f2f2b6cb3a382a4ea2c5e0a16b7306168cd425e3d6324e5d05af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-430513",
	                "Source": "/var/lib/docker/volumes/embed-certs-430513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-430513",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-430513",
	                "name.minikube.sigs.k8s.io": "embed-certs-430513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cfe8adb7aca3e307ebf87e35fe3034216bb9a48a3f6c02b3637dc26344d5ffa9",
	            "SandboxKey": "/var/run/docker/netns/cfe8adb7aca3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-430513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5a35f2144e5ffd9ac7511406e9418188a3c5784e35110b679aaeaa5b02f5ee9",
	                    "EndpointID": "5b65da5414305a2d45dd0df0ad496187f3de55f236f0ed465d498f822f0164c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "26:9a:3d:ac:b8:fc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-430513",
	                        "0d1528353148"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513: exit status 2 (336.090273ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-430513 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-430513 logs -n 25: (1.180145484s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-034018 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat docker --no-pager                                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/docker/daemon.json                                                                                        │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo docker system info                                                                                                 │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cri-dockerd --version                                                                                              │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat containerd --no-pager                                                                                │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/containerd/config.toml                                                                                    │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo containerd config dump                                                                                             │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl cat crio --no-pager                                                                                      │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo crio config                                                                                                        │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ delete  │ -p auto-034018                                                                                                                         │ auto-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ start   │ -p calico-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-034018      │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ image   │ embed-certs-430513 image list --format=json                                                                                            │ embed-certs-430513 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ pause   │ -p embed-certs-430513 --alsologtostderr -v=1                                                                                           │ embed-certs-430513 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:02:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:02:22.572627  644840 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:22.572892  644840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:22.572903  644840 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:22.572907  644840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:22.573104  644840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:02:22.573610  644840 out.go:368] Setting JSON to false
	I1115 10:02:22.574937  644840 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6284,"bootTime":1763194659,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:02:22.575029  644840 start.go:143] virtualization: kvm guest
	I1115 10:02:22.577244  644840 out.go:179] * [calico-034018] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:02:22.578510  644840 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:02:22.578551  644840 notify.go:221] Checking for updates...
	I1115 10:02:22.580947  644840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:02:22.582252  644840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:22.583546  644840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:02:22.584811  644840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:02:22.586184  644840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:02:22.588005  644840 config.go:182] Loaded profile config "default-k8s-diff-port-679865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:22.588113  644840 config.go:182] Loaded profile config "embed-certs-430513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:22.588225  644840 config.go:182] Loaded profile config "kindnet-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:22.588346  644840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:02:22.613234  644840 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:02:22.613340  644840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:22.675328  644840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:02:22.663748723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:22.675534  644840 docker.go:319] overlay module found
	I1115 10:02:22.677472  644840 out.go:179] * Using the docker driver based on user configuration
	I1115 10:02:22.678650  644840 start.go:309] selected driver: docker
	I1115 10:02:22.678667  644840 start.go:930] validating driver "docker" against <nil>
	I1115 10:02:22.678679  644840 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:02:22.679261  644840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:22.745630  644840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 10:02:22.736035397 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:22.745779  644840 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:02:22.745973  644840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:02:22.747887  644840 out.go:179] * Using Docker driver with root privileges
	I1115 10:02:22.749150  644840 cni.go:84] Creating CNI manager for "calico"
	I1115 10:02:22.749176  644840 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1115 10:02:22.749291  644840 start.go:353] cluster config:
	{Name:calico-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:22.750764  644840 out.go:179] * Starting "calico-034018" primary control-plane node in "calico-034018" cluster
	I1115 10:02:22.751940  644840 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:02:22.753095  644840 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:02:22.754307  644840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:22.754345  644840 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:02:22.754359  644840 cache.go:65] Caching tarball of preloaded images
	I1115 10:02:22.754383  644840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:02:22.754483  644840 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:02:22.754498  644840 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:02:22.754592  644840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/config.json ...
	I1115 10:02:22.754613  644840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/config.json: {Name:mk1e647214f00a9b9d4fa1d08f640554ac317c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:22.776136  644840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:02:22.776169  644840 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:02:22.776192  644840 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:02:22.776223  644840 start.go:360] acquireMachinesLock for calico-034018: {Name:mk2832fa6a8a4c61196c221c11c833ad8a48bbe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:02:22.776343  644840 start.go:364] duration metric: took 98.254µs to acquireMachinesLock for "calico-034018"
	I1115 10:02:22.776373  644840 start.go:93] Provisioning new machine with config: &{Name:calico-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034018 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:22.776488  644840 start.go:125] createHost starting for "" (driver="docker")
	I1115 10:02:21.770687  636459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1115 10:02:21.775953  636459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:02:21.775973  636459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1115 10:02:21.790080  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:02:22.063883  636459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:02:22.064056  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-034018 minikube.k8s.io/updated_at=2025_11_15T10_02_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=kindnet-034018 minikube.k8s.io/primary=true
	I1115 10:02:22.064186  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:22.076061  636459 ops.go:34] apiserver oom_adj: -16
	I1115 10:02:22.156857  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:22.657841  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:23.157877  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:23.657099  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1115 10:02:22.214550  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	W1115 10:02:24.216479  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	I1115 10:02:24.157608  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:24.657831  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:25.157800  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:25.657561  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:26.157480  636459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:26.268041  636459 kubeadm.go:1114] duration metric: took 4.204127028s to wait for elevateKubeSystemPrivileges
	I1115 10:02:26.268077  636459 kubeadm.go:403] duration metric: took 17.769934383s to StartCluster
	I1115 10:02:26.268101  636459 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:26.268182  636459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:26.270069  636459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:26.332107  636459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:02:26.332130  636459 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:26.332210  636459 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:02:26.332312  636459 addons.go:70] Setting storage-provisioner=true in profile "kindnet-034018"
	I1115 10:02:26.332325  636459 config.go:182] Loaded profile config "kindnet-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:26.332339  636459 addons.go:70] Setting default-storageclass=true in profile "kindnet-034018"
	I1115 10:02:26.332365  636459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-034018"
	I1115 10:02:26.332333  636459 addons.go:239] Setting addon storage-provisioner=true in "kindnet-034018"
	I1115 10:02:26.332434  636459 host.go:66] Checking if "kindnet-034018" exists ...
	I1115 10:02:26.332800  636459 cli_runner.go:164] Run: docker container inspect kindnet-034018 --format={{.State.Status}}
	I1115 10:02:26.332978  636459 cli_runner.go:164] Run: docker container inspect kindnet-034018 --format={{.State.Status}}
	I1115 10:02:26.474821  636459 addons.go:239] Setting addon default-storageclass=true in "kindnet-034018"
	I1115 10:02:26.474862  636459 host.go:66] Checking if "kindnet-034018" exists ...
	I1115 10:02:26.475175  636459 cli_runner.go:164] Run: docker container inspect kindnet-034018 --format={{.State.Status}}
	I1115 10:02:26.493787  636459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:26.493809  636459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:02:26.493863  636459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034018
	I1115 10:02:26.511654  636459 out.go:179] * Verifying Kubernetes components...
	I1115 10:02:26.512838  636459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/kindnet-034018/id_rsa Username:docker}
	I1115 10:02:26.620352  636459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:02:22.778574  644840 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:02:22.778887  644840 start.go:159] libmachine.API.Create for "calico-034018" (driver="docker")
	I1115 10:02:22.778927  644840 client.go:173] LocalClient.Create starting
	I1115 10:02:22.779005  644840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:02:22.779046  644840 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:22.779076  644840 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:22.779136  644840 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:02:22.779161  644840 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:22.779171  644840 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:22.779544  644840 cli_runner.go:164] Run: docker network inspect calico-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:02:22.797207  644840 cli_runner.go:211] docker network inspect calico-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:02:22.797285  644840 network_create.go:284] running [docker network inspect calico-034018] to gather additional debugging logs...
	I1115 10:02:22.797305  644840 cli_runner.go:164] Run: docker network inspect calico-034018
	W1115 10:02:22.814729  644840 cli_runner.go:211] docker network inspect calico-034018 returned with exit code 1
	I1115 10:02:22.814777  644840 network_create.go:287] error running [docker network inspect calico-034018]: docker network inspect calico-034018: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-034018 not found
	I1115 10:02:22.814802  644840 network_create.go:289] output of [docker network inspect calico-034018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-034018 not found
	
	** /stderr **
	I1115 10:02:22.814977  644840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:02:22.833874  644840 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:02:22.834954  644840 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:02:22.835596  644840 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:02:22.836326  644840 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b5a35f2144e5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:aa:c4:ce:f8:c4} reservation:<nil>}
	I1115 10:02:22.837128  644840 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0a7ab291fd7d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c6:62:69:38:b2:19} reservation:<nil>}
	I1115 10:02:22.838139  644840 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e74230}
	I1115 10:02:22.838161  644840 network_create.go:124] attempt to create docker network calico-034018 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1115 10:02:22.838217  644840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-034018 calico-034018
	I1115 10:02:22.890041  644840 network_create.go:108] docker network calico-034018 192.168.94.0/24 created
	I1115 10:02:22.890082  644840 kic.go:121] calculated static IP "192.168.94.2" for the "calico-034018" container
	I1115 10:02:22.890137  644840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:02:22.910782  644840 cli_runner.go:164] Run: docker volume create calico-034018 --label name.minikube.sigs.k8s.io=calico-034018 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:02:22.929792  644840 oci.go:103] Successfully created a docker volume calico-034018
	I1115 10:02:22.929867  644840 cli_runner.go:164] Run: docker run --rm --name calico-034018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-034018 --entrypoint /usr/bin/test -v calico-034018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:02:23.330138  644840 oci.go:107] Successfully prepared a docker volume calico-034018
	I1115 10:02:23.330221  644840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:23.330234  644840 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:02:23.330309  644840 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-034018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1115 10:02:26.639863  636459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:26.654327  636459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:02:26.662933  636459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:26.662964  636459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:02:26.663023  636459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-034018
	I1115 10:02:26.696472  636459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33484 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/kindnet-034018/id_rsa Username:docker}
	I1115 10:02:26.715840  636459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:26.786240  636459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:02:26.810500  636459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:27.318607  636459 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1115 10:02:27.320161  636459 node_ready.go:35] waiting up to 15m0s for node "kindnet-034018" to be "Ready" ...
	I1115 10:02:27.826310  636459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-034018" context rescaled to 1 replicas
	I1115 10:02:27.881017  636459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070473716s)
	I1115 10:02:27.885926  636459 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1115 10:02:27.887180  636459 addons.go:515] duration metric: took 1.554966461s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1115 10:02:26.715684  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	W1115 10:02:28.716653  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 15 10:01:46 embed-certs-430513 crio[560]: time="2025-11-15T10:01:46.139663545Z" level=info msg="Started container" PID=1730 containerID=8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper id=a2ca6aaa-1095-463a-847f-e312a09147d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d0206500793c2d8e8f28509b1b6d3bd8f0d01d3daed8750e287409c686303f34
	Nov 15 10:01:47 embed-certs-430513 crio[560]: time="2025-11-15T10:01:47.107781326Z" level=info msg="Removing container: dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4" id=b9bcbdaa-3a4d-4bc1-8ffc-573686ed0531 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:01:47 embed-certs-430513 crio[560]: time="2025-11-15T10:01:47.12017726Z" level=info msg="Removed container dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=b9bcbdaa-3a4d-4bc1-8ffc-573686ed0531 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.155554482Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b874dbd0-66d1-4937-ac48-96296ec5d380 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.15663803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d3568128-ea7d-49a1-b7ea-20b119e57608 name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.157812332Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=82462e66-8ce3-400d-9fc2-9db9ff23dda2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.157961572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.163164264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.163410492Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c19e9f101f54c66de24565108bc11bbcb8f9c2157dace7df66bc3e6adfb898a6/merged/etc/passwd: no such file or directory"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.163443693Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c19e9f101f54c66de24565108bc11bbcb8f9c2157dace7df66bc3e6adfb898a6/merged/etc/group: no such file or directory"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.16375245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.210779458Z" level=info msg="Created container 6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988: kube-system/storage-provisioner/storage-provisioner" id=82462e66-8ce3-400d-9fc2-9db9ff23dda2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.215768818Z" level=info msg="Starting container: 6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988" id=a935aa4f-8af1-454a-bccd-87c1a8162cdf name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:02:06 embed-certs-430513 crio[560]: time="2025-11-15T10:02:06.220127904Z" level=info msg="Started container" PID=1744 containerID=6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988 description=kube-system/storage-provisioner/storage-provisioner id=a935aa4f-8af1-454a-bccd-87c1a8162cdf name=/runtime.v1.RuntimeService/StartContainer sandboxID=543ccd7b83bbeec5e2a2f284ae6cfa0b11b3a3fbfe7af73340ed5def616b84d1
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.023625267Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b8dacf5-1dc1-4931-b653-0bb5b722b2ae name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.024758179Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=46d36edb-7e8b-4f9f-a9ca-03d94750b52c name=/runtime.v1.ImageService/ImageStatus
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.02585344Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=86a12d70-3733-4ff9-9917-de45cca2c7b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.025996893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.032926747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.033508252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.071947271Z" level=info msg="Created container 50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=86a12d70-3733-4ff9-9917-de45cca2c7b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.072669322Z" level=info msg="Starting container: 50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969" id=2c46af62-6b19-4394-b93c-a345b46d39c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.075132363Z" level=info msg="Started container" PID=1758 containerID=50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper id=2c46af62-6b19-4394-b93c-a345b46d39c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d0206500793c2d8e8f28509b1b6d3bd8f0d01d3daed8750e287409c686303f34
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.17169125Z" level=info msg="Removing container: 8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e" id=167acb3f-4b3a-4635-aa87-bd36b1cad514 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 15 10:02:09 embed-certs-430513 crio[560]: time="2025-11-15T10:02:09.183606459Z" level=info msg="Removed container 8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv/dashboard-metrics-scraper" id=167acb3f-4b3a-4635-aa87-bd36b1cad514 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	50a34c1d55c4e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   d0206500793c2       dashboard-metrics-scraper-6ffb444bf9-4msxv   kubernetes-dashboard
	6a9e72cff7916       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   543ccd7b83bbe       storage-provisioner                          kube-system
	e9065e2f0ff84       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   94e4a51da5d58       kubernetes-dashboard-855c9754f9-9dvs6        kubernetes-dashboard
	cbdca37a79bed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   2e075579da3a5       coredns-66bc5c9577-6gvgh                     kube-system
	f1ce6f672a761       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   454f05d4d0f83       busybox                                      default
	23f964d7a59fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   543ccd7b83bbe       storage-provisioner                          kube-system
	ce8bf7d712ce4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   d7ad572a4c7fe       kindnet-h26k6                                kube-system
	4caca61ffa84f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   9c248c6de7c0d       kube-proxy-kd7wd                             kube-system
	7884e9381d1df       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   daafac7c33a1d       kube-controller-manager-embed-certs-430513   kube-system
	5fecf1854c34c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   12b4ab1ba2337       kube-scheduler-embed-certs-430513            kube-system
	edbf223b01e79       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   459a6ad3053d9       kube-apiserver-embed-certs-430513            kube-system
	aa074b2293679       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   2c4d45bf95ce8       etcd-embed-certs-430513                      kube-system
	
	
	==> coredns [cbdca37a79bed3f5268c2953f014f1f188b62728299fc5e94dc49e84105b8781] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43410 - 37722 "HINFO IN 2948087706110210287.8991388019833740165. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.150234474s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-430513
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-430513
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=embed-certs-430513
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_00_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:00:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-430513
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:02:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:02:15 +0000   Sat, 15 Nov 2025 10:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-430513
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                5e71a89d-4318-4931-9ea5-663742f9579f
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-6gvgh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-embed-certs-430513                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-h26k6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-430513             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-430513    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-kd7wd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-430513             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4msxv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9dvs6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node embed-certs-430513 event: Registered Node embed-certs-430513 in Controller
	  Normal  NodeReady                99s                kubelet          Node embed-certs-430513 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-430513 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-430513 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node embed-certs-430513 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-430513 event: Registered Node embed-certs-430513 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [aa074b22936792966ead83faadae096faa591efe77ef77f4c0e0ec3344f4e2e9] <==
	{"level":"warn","ts":"2025-11-15T10:01:33.881457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.899882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.907856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.919599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.926236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.934044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.941566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.949869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.960074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.967607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.975166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.981444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.990056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:33.996974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.007207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.015295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.022153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.029743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.045520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.054477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.066916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.075558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.084742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:01:34.139455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33858","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T10:02:25.816975Z","caller":"traceutil/trace.go:172","msg":"trace[1761666011] transaction","detail":"{read_only:false; response_revision:673; number_of_response:1; }","duration":"111.869324ms","start":"2025-11-15T10:02:25.705086Z","end":"2025-11-15T10:02:25.816956Z","steps":["trace[1761666011] 'process raft request'  (duration: 111.746788ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:02:31 up  1:44,  0 user,  load average: 5.48, 3.61, 2.23
	Linux embed-certs-430513 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce8bf7d712ce411f45bad7e0da6cda07264c3abd84c422e256c119681f884ced] <==
	I1115 10:01:35.529112       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:01:35.529338       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1115 10:01:35.529534       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:01:35.529552       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:01:35.529569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:01:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:01:35.828933       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:01:35.830071       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:01:35.922929       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:01:35.923647       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 10:01:36.124732       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:01:36.124767       1 metrics.go:72] Registering metrics
	I1115 10:01:36.124835       1 controller.go:711] "Syncing nftables rules"
	I1115 10:01:45.828545       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:01:45.828639       1 main.go:301] handling current node
	I1115 10:01:55.832540       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:01:55.832591       1 main.go:301] handling current node
	I1115 10:02:05.828582       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:02:05.828611       1 main.go:301] handling current node
	I1115 10:02:15.830509       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:02:15.830561       1 main.go:301] handling current node
	I1115 10:02:25.837347       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1115 10:02:25.837384       1 main.go:301] handling current node
	
	
	==> kube-apiserver [edbf223b01e791d146a5f2ad465d24c0a6d60f196e80f447883f5851e9f2a5af] <==
	I1115 10:01:34.720286       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:01:34.720520       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:01:34.720520       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:01:34.721297       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:01:34.721322       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:01:34.721892       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1115 10:01:34.718489       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:01:34.726001       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1115 10:01:34.726070       1 policy_source.go:240] refreshing policies
	I1115 10:01:34.727088       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1115 10:01:34.729766       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1115 10:01:34.775333       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1115 10:01:34.778321       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 10:01:35.087824       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:01:35.100455       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:01:35.128428       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:01:35.151429       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:01:35.160024       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:01:35.215316       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.38.186"}
	I1115 10:01:35.228748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.33.61"}
	I1115 10:01:35.622281       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:01:38.353258       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:01:38.353312       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 10:01:38.454219       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:01:38.504468       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7884e9381d1df9759c7a3893af1cf75c8acb92edff2489e9e07e1d1d4102b7df] <==
	I1115 10:01:38.033426       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 10:01:38.035659       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:01:38.051952       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:01:38.051980       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 10:01:38.051980       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1115 10:01:38.052002       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 10:01:38.052000       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:01:38.052012       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:01:38.053621       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 10:01:38.053711       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:01:38.055287       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1115 10:01:38.056427       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:01:38.056757       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:01:38.057582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:01:38.057608       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:01:38.057695       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1115 10:01:38.059821       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:01:38.061049       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1115 10:01:38.061165       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:01:38.065420       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:01:38.065440       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:01:38.065447       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1115 10:01:38.068073       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:01:38.077404       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:01:38.087631       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-proxy [4caca61ffa84fa2ac3d3a7a94231508ee36b8ef8047706a8c4c1af15b3e8e74f] <==
	I1115 10:01:35.424877       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:01:35.495535       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:01:35.596037       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:01:35.596125       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1115 10:01:35.596257       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:01:35.617344       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:01:35.617416       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:01:35.624530       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:01:35.625416       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:01:35.625433       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:35.627197       1 config.go:200] "Starting service config controller"
	I1115 10:01:35.627228       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:01:35.627423       1 config.go:309] "Starting node config controller"
	I1115 10:01:35.627729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:01:35.627801       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:01:35.628089       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:01:35.628163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:01:35.628313       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:01:35.628342       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:01:35.727905       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:01:35.728300       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 10:01:35.728877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5fecf1854c34c29514b1ec6c6221755aeaa0b46dbd1e7d27edaf9fa5c71f7871] <==
	I1115 10:01:33.596799       1 serving.go:386] Generated self-signed cert in-memory
	I1115 10:01:34.757920       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1115 10:01:34.758018       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:01:34.763483       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1115 10:01:34.763606       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:34.763660       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:34.763684       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:34.763719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:34.763567       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:01:34.765934       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1115 10:01:34.765994       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1115 10:01:34.864496       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:01:34.864524       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1115 10:01:34.869806       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722221     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0969e69a-a9ba-4971-9bdb-640845c9f45d-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9dvs6\" (UID: \"0969e69a-a9ba-4971-9bdb-640845c9f45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dvs6"
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722295     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gqv\" (UniqueName: \"kubernetes.io/projected/0969e69a-a9ba-4971-9bdb-640845c9f45d-kube-api-access-x5gqv\") pod \"kubernetes-dashboard-855c9754f9-9dvs6\" (UID: \"0969e69a-a9ba-4971-9bdb-640845c9f45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dvs6"
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722323     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d91a212a-9dfd-4045-8cc8-e448d6c84ff8-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4msxv\" (UID: \"d91a212a-9dfd-4045-8cc8-e448d6c84ff8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv"
	Nov 15 10:01:38 embed-certs-430513 kubelet[719]: I1115 10:01:38.722375     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbjcn\" (UniqueName: \"kubernetes.io/projected/d91a212a-9dfd-4045-8cc8-e448d6c84ff8-kube-api-access-sbjcn\") pod \"dashboard-metrics-scraper-6ffb444bf9-4msxv\" (UID: \"d91a212a-9dfd-4045-8cc8-e448d6c84ff8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv"
	Nov 15 10:01:42 embed-certs-430513 kubelet[719]: I1115 10:01:42.568101     719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 15 10:01:43 embed-certs-430513 kubelet[719]: I1115 10:01:43.102319     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9dvs6" podStartSLOduration=1.404163342 podStartE2EDuration="5.102293887s" podCreationTimestamp="2025-11-15 10:01:38 +0000 UTC" firstStartedPulling="2025-11-15 10:01:38.899035025 +0000 UTC m=+6.969032327" lastFinishedPulling="2025-11-15 10:01:42.597165562 +0000 UTC m=+10.667162872" observedRunningTime="2025-11-15 10:01:43.101692169 +0000 UTC m=+11.171689508" watchObservedRunningTime="2025-11-15 10:01:43.102293887 +0000 UTC m=+11.172291204"
	Nov 15 10:01:46 embed-certs-430513 kubelet[719]: I1115 10:01:46.095101     719 scope.go:117] "RemoveContainer" containerID="dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4"
	Nov 15 10:01:47 embed-certs-430513 kubelet[719]: I1115 10:01:47.102977     719 scope.go:117] "RemoveContainer" containerID="dd9ca716680d123b8085ce160642af79a0875b6df02934db1fdb6cec62f708c4"
	Nov 15 10:01:47 embed-certs-430513 kubelet[719]: I1115 10:01:47.103257     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:01:47 embed-certs-430513 kubelet[719]: E1115 10:01:47.104632     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:01:48 embed-certs-430513 kubelet[719]: I1115 10:01:48.107025     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:01:48 embed-certs-430513 kubelet[719]: E1115 10:01:48.107230     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:01:56 embed-certs-430513 kubelet[719]: I1115 10:01:56.021139     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:01:56 embed-certs-430513 kubelet[719]: E1115 10:01:56.021448     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:02:06 embed-certs-430513 kubelet[719]: I1115 10:02:06.155012     719 scope.go:117] "RemoveContainer" containerID="23f964d7a59fbb004c1029367966897a137f95d25128e0a59e80531fb4a8877e"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: I1115 10:02:09.023074     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: I1115 10:02:09.169929     719 scope.go:117] "RemoveContainer" containerID="8eafc492888e803fbb86db9f1a70339ef4c6e6ac80d7d2fb9e467a7429de171e"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: I1115 10:02:09.170288     719 scope.go:117] "RemoveContainer" containerID="50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969"
	Nov 15 10:02:09 embed-certs-430513 kubelet[719]: E1115 10:02:09.170513     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:02:16 embed-certs-430513 kubelet[719]: I1115 10:02:16.021250     719 scope.go:117] "RemoveContainer" containerID="50a34c1d55c4eea90f8947e70ee270cc75958a09a906d68196ca932a492dd969"
	Nov 15 10:02:16 embed-certs-430513 kubelet[719]: E1115 10:02:16.022364     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4msxv_kubernetes-dashboard(d91a212a-9dfd-4045-8cc8-e448d6c84ff8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4msxv" podUID="d91a212a-9dfd-4045-8cc8-e448d6c84ff8"
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:02:26 embed-certs-430513 systemd[1]: kubelet.service: Consumed 1.801s CPU time.
	
	
	==> kubernetes-dashboard [e9065e2f0ff841f373976332fe044a958d99160f1ca62ef99f93d1a22174fdeb] <==
	2025/11/15 10:01:42 Using namespace: kubernetes-dashboard
	2025/11/15 10:01:42 Using in-cluster config to connect to apiserver
	2025/11/15 10:01:42 Using secret token for csrf signing
	2025/11/15 10:01:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:01:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:01:42 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:01:42 Generating JWE encryption key
	2025/11/15 10:01:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:01:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:01:43 Initializing JWE encryption key from synchronized object
	2025/11/15 10:01:43 Creating in-cluster Sidecar client
	2025/11/15 10:01:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:01:43 Serving insecurely on HTTP port: 9090
	2025/11/15 10:02:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:01:42 Starting overwatch
	
	
	==> storage-provisioner [23f964d7a59fbb004c1029367966897a137f95d25128e0a59e80531fb4a8877e] <==
	I1115 10:01:35.397646       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:02:05.399968       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6a9e72cff791671c4897ad13ee17d7cde68ffcfd352cb407e01b44a4e1d21988] <==
	I1115 10:02:06.243866       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:02:06.280546       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:02:06.280612       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:02:06.288541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:09.745648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:14.006923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:17.607369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:20.660919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:23.684201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:23.689369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:23.689560       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:02:23.689747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-430513_cc39e367-3afd-4738-bb9b-fc1a6cc09f16!
	I1115 10:02:23.689697       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd082bdf-d760-43e6-b6b6-335a4fbc7891", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-430513_cc39e367-3afd-4738-bb9b-fc1a6cc09f16 became leader
	W1115 10:02:23.693178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:23.698130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:23.790091       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-430513_cc39e367-3afd-4738-bb9b-fc1a6cc09f16!
	W1115 10:02:25.701939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:25.817989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:27.822905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:27.829101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:29.832296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:29.837521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-430513 -n embed-certs-430513
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-430513 -n embed-certs-430513: exit status 2 (350.537067ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-430513 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-679865 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-679865 --alsologtostderr -v=1: exit status 80 (1.885710564s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-679865 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:02:52.466920  652913 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:52.467236  652913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:52.467249  652913 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:52.467254  652913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:52.467503  652913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:02:52.467759  652913 out.go:368] Setting JSON to false
	I1115 10:02:52.467840  652913 mustload.go:66] Loading cluster: default-k8s-diff-port-679865
	I1115 10:02:52.468212  652913 config.go:182] Loaded profile config "default-k8s-diff-port-679865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:52.468660  652913 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-679865 --format={{.State.Status}}
	I1115 10:02:52.487719  652913 host.go:66] Checking if "default-k8s-diff-port-679865" exists ...
	I1115 10:02:52.488103  652913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:52.550270  652913 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-15 10:02:52.539752357 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:52.550931  652913 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-679865 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1115 10:02:52.552959  652913 out.go:179] * Pausing node default-k8s-diff-port-679865 ... 
	I1115 10:02:52.554143  652913 host.go:66] Checking if "default-k8s-diff-port-679865" exists ...
	I1115 10:02:52.554429  652913 ssh_runner.go:195] Run: systemctl --version
	I1115 10:02:52.554477  652913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-679865
	I1115 10:02:52.573134  652913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33479 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/default-k8s-diff-port-679865/id_rsa Username:docker}
	I1115 10:02:52.668563  652913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:52.682037  652913 pause.go:52] kubelet running: true
	I1115 10:02:52.682110  652913 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:02:52.854484  652913 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:02:52.854595  652913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:02:52.934174  652913 cri.go:89] found id: "41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2"
	I1115 10:02:52.934201  652913 cri.go:89] found id: "32fe67745ed10f95d2f17825b82c788a9ff22653f0f715cfcd3760aa162dd40a"
	I1115 10:02:52.934207  652913 cri.go:89] found id: "b641794e62bea8b62572b411355c7f914cf43c7562c880b8c4edb09ed1669019"
	I1115 10:02:52.934212  652913 cri.go:89] found id: "d8915a281afaa6736017a3530f1781a5398760b8d656d748a1d9e9da3d690f31"
	I1115 10:02:52.934216  652913 cri.go:89] found id: "b0faf6ec7f64ca9800ab743771a847d1b3a7eb0f8db4a21455d9a12122d0372d"
	I1115 10:02:52.934221  652913 cri.go:89] found id: "97ee6a21580e9b7957b3dcf359e11e5b217e1a40e090ac2ee838797b9fdce0cc"
	I1115 10:02:52.934225  652913 cri.go:89] found id: "35c85b6acec1d4f4a155901044f09a0aad4f8ee6965e9a163bb790680c84c184"
	I1115 10:02:52.934230  652913 cri.go:89] found id: "0d7cda73760c10da27ca408e9cf406330d687485abfb473948a9af8b77257d98"
	I1115 10:02:52.934234  652913 cri.go:89] found id: "9282ef22a41e45b12389c8dd7333237e091c1de52b31375f6caae152743253eb"
	I1115 10:02:52.934242  652913 cri.go:89] found id: "d905bb086e1338902f1ad7c01443492f6ff71442781f3952bf847f849778f855"
	I1115 10:02:52.934246  652913 cri.go:89] found id: "a12019be5efb212443fa3cd0d63f001ce894d1d08de1f00d096804524401e2cf"
	I1115 10:02:52.934250  652913 cri.go:89] found id: ""
	I1115 10:02:52.934294  652913 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:02:52.950159  652913 retry.go:31] will retry after 252.929741ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:02:52Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:02:53.203589  652913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:53.221153  652913 pause.go:52] kubelet running: false
	I1115 10:02:53.221214  652913 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:02:53.467773  652913 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:02:53.467865  652913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:02:53.569212  652913 cri.go:89] found id: "41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2"
	I1115 10:02:53.569237  652913 cri.go:89] found id: "32fe67745ed10f95d2f17825b82c788a9ff22653f0f715cfcd3760aa162dd40a"
	I1115 10:02:53.569243  652913 cri.go:89] found id: "b641794e62bea8b62572b411355c7f914cf43c7562c880b8c4edb09ed1669019"
	I1115 10:02:53.569248  652913 cri.go:89] found id: "d8915a281afaa6736017a3530f1781a5398760b8d656d748a1d9e9da3d690f31"
	I1115 10:02:53.569252  652913 cri.go:89] found id: "b0faf6ec7f64ca9800ab743771a847d1b3a7eb0f8db4a21455d9a12122d0372d"
	I1115 10:02:53.569257  652913 cri.go:89] found id: "97ee6a21580e9b7957b3dcf359e11e5b217e1a40e090ac2ee838797b9fdce0cc"
	I1115 10:02:53.569260  652913 cri.go:89] found id: "35c85b6acec1d4f4a155901044f09a0aad4f8ee6965e9a163bb790680c84c184"
	I1115 10:02:53.569265  652913 cri.go:89] found id: "0d7cda73760c10da27ca408e9cf406330d687485abfb473948a9af8b77257d98"
	I1115 10:02:53.569268  652913 cri.go:89] found id: "9282ef22a41e45b12389c8dd7333237e091c1de52b31375f6caae152743253eb"
	I1115 10:02:53.569282  652913 cri.go:89] found id: "d905bb086e1338902f1ad7c01443492f6ff71442781f3952bf847f849778f855"
	I1115 10:02:53.569289  652913 cri.go:89] found id: "a12019be5efb212443fa3cd0d63f001ce894d1d08de1f00d096804524401e2cf"
	I1115 10:02:53.569294  652913 cri.go:89] found id: ""
	I1115 10:02:53.569339  652913 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:02:53.585383  652913 retry.go:31] will retry after 315.672664ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:02:53Z" level=error msg="open /run/runc: no such file or directory"
	I1115 10:02:53.901728  652913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:53.920130  652913 pause.go:52] kubelet running: false
	I1115 10:02:53.920273  652913 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1115 10:02:54.148920  652913 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1115 10:02:54.149111  652913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1115 10:02:54.256815  652913 cri.go:89] found id: "41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2"
	I1115 10:02:54.256847  652913 cri.go:89] found id: "32fe67745ed10f95d2f17825b82c788a9ff22653f0f715cfcd3760aa162dd40a"
	I1115 10:02:54.256853  652913 cri.go:89] found id: "b641794e62bea8b62572b411355c7f914cf43c7562c880b8c4edb09ed1669019"
	I1115 10:02:54.256857  652913 cri.go:89] found id: "d8915a281afaa6736017a3530f1781a5398760b8d656d748a1d9e9da3d690f31"
	I1115 10:02:54.256862  652913 cri.go:89] found id: "b0faf6ec7f64ca9800ab743771a847d1b3a7eb0f8db4a21455d9a12122d0372d"
	I1115 10:02:54.256867  652913 cri.go:89] found id: "97ee6a21580e9b7957b3dcf359e11e5b217e1a40e090ac2ee838797b9fdce0cc"
	I1115 10:02:54.256872  652913 cri.go:89] found id: "35c85b6acec1d4f4a155901044f09a0aad4f8ee6965e9a163bb790680c84c184"
	I1115 10:02:54.256876  652913 cri.go:89] found id: "0d7cda73760c10da27ca408e9cf406330d687485abfb473948a9af8b77257d98"
	I1115 10:02:54.256879  652913 cri.go:89] found id: "9282ef22a41e45b12389c8dd7333237e091c1de52b31375f6caae152743253eb"
	I1115 10:02:54.256887  652913 cri.go:89] found id: "d905bb086e1338902f1ad7c01443492f6ff71442781f3952bf847f849778f855"
	I1115 10:02:54.256891  652913 cri.go:89] found id: "a12019be5efb212443fa3cd0d63f001ce894d1d08de1f00d096804524401e2cf"
	I1115 10:02:54.256895  652913 cri.go:89] found id: ""
	I1115 10:02:54.256939  652913 ssh_runner.go:195] Run: sudo runc list -f json
	I1115 10:02:54.276543  652913 out.go:203] 
	W1115 10:02:54.278095  652913 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:02:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T10:02:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1115 10:02:54.278116  652913 out.go:285] * 
	* 
	W1115 10:02:54.283817  652913 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1115 10:02:54.286505  652913 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-679865 --alsologtostderr -v=1 failed: exit status 80
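Reading the stderr above, the pause aborts because `sudo runc list -f json` keeps failing with "open /run/runc: no such file or directory", even though the crictl queries immediately before it report eleven running containers in the target namespaces. A plausible (unconfirmed) reading is that on this crio node the low-level OCI runtime keeps its state under a root other than runc's default /run/runc, so a bare `runc list` has nothing to open. The sketch below is a hypothetical stand-alone diagnostic in Go, not minikube's implementation; the candidate state roots other than /run/runc are assumptions, and the crictl/runc invocations are the same ones already visible in the log.

	// pause_diag.go - hypothetical helper illustrating the failure mode above.
	// It asks crictl for kube-system containers, then tries `runc list`
	// against a few candidate state roots instead of assuming /run/runc exists.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query the test log issues over SSH, run locally for brevity.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl ps failed:", err)
			os.Exit(1)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("crictl sees %d kube-system containers\n", len(ids))

		// /run/runc is runc's default root; the other entries are guesses for
		// hosts where a different runtime (or a different root) owns the state.
		for _, root := range []string{"/run/runc", "/run/crun", "/run/containers/storage"} {
			if _, statErr := os.Stat(root); statErr != nil {
				fmt.Printf("state root %s not present (%v)\n", root, statErr)
				continue
			}
			b, listErr := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
			if listErr == nil {
				fmt.Printf("runc list against %s:\n%s\n", root, b)
				return
			}
			fmt.Printf("runc list against %s failed: %v\n", root, listErr)
		}
		fmt.Println("no usable runc state root found; only crictl output is available")
	}

Run on the node (for example via minikube ssh), this would show whether the containers crictl reports are tracked under some other state root, which is what the three retries in the log are implicitly probing for before giving up with GUEST_PAUSE.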
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-679865
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-679865:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2",
	        "Created": "2025-11-15T10:00:47.592632721Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 635645,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:01:55.739124371Z",
	            "FinishedAt": "2025-11-15T10:01:54.535544936Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/hosts",
	        "LogPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2-json.log",
	        "Name": "/default-k8s-diff-port-679865",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-679865:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-679865",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2",
	                "LowerDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-679865",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-679865/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-679865",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-679865",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-679865",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "156d2e6f90aec03f84e3091ccb2cd6c454a72268eb4b7022e6f0b6c227d6fd7f",
	            "SandboxKey": "/var/run/docker/netns/156d2e6f90ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-679865": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0a7ab291fd7d7a6f03caec52507c3e2e0702cb6e9e4295365d7aba23864f9771",
	                    "EndpointID": "d33aa3f488ae2fbbef4ff7321fbefac99e554b785c43f8e115c621aa06ab5257",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "fe:2b:bd:5f:c3:e5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-679865",
	                        "0b40f9321403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
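The helpers capture the full docker inspect verbatim, but only a few fields above bear on the failure: the kic container is Running, it was restarted at 10:01:55 (a second after the previous FinishedAt), /run and /tmp are tmpfs mounts, and 22/tcp maps to 127.0.0.1:33479, the SSH endpoint the pause command used. A hypothetical Go-template query like the sketch below pulls just those fields; the field paths come from the JSON above, and the `docker container inspect -f` mechanism is the same one the log already uses, but the helper itself is illustrative, not part of the test suite.

	// inspect_fields.go - hypothetical narrowing of the post-mortem inspect output.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Limit the inspect output to the fields relevant to the GUEST_PAUSE failure.
		format := `{{.State.Status}} started_at={{.State.StartedAt}} ` +
			`tmpfs={{.HostConfig.Tmpfs}} ssh_port={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"default-k8s-diff-port-679865").CombinedOutput()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		fmt.Print(string(out))
	}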
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865: exit status 2 (421.229348ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-679865 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-679865 logs -n 25: (1.640200759s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-034018 sudo docker system info                                                                                                                             │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cri-dockerd --version                                                                                                                          │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo containerd config dump                                                                                                                         │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo crio config                                                                                                                                    │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ delete  │ -p auto-034018                                                                                                                                                     │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ start   │ -p calico-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-034018                │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ image   │ embed-certs-430513 image list --format=json                                                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ pause   │ -p embed-certs-430513 --alsologtostderr -v=1                                                                                                                       │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ delete  │ -p embed-certs-430513                                                                                                                                              │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ delete  │ -p embed-certs-430513                                                                                                                                              │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ start   │ -p custom-flannel-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p kindnet-034018 pgrep -a kubelet                                                                                                                                 │ kindnet-034018               │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ image   │ default-k8s-diff-port-679865 image list --format=json                                                                                                              │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ pause   │ -p default-k8s-diff-port-679865 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:02:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:02:35.265541  649367 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:35.265685  649367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:35.265696  649367 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:35.265703  649367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:35.266453  649367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:02:35.267290  649367 out.go:368] Setting JSON to false
	I1115 10:02:35.268837  649367 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6296,"bootTime":1763194659,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:02:35.268949  649367 start.go:143] virtualization: kvm guest
	I1115 10:02:35.270822  649367 out.go:179] * [custom-flannel-034018] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:02:35.272526  649367 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:02:35.272554  649367 notify.go:221] Checking for updates...
	I1115 10:02:35.275314  649367 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:02:35.276558  649367 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:35.277888  649367 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:02:35.279068  649367 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:02:35.280252  649367 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:02:35.281908  649367 config.go:182] Loaded profile config "calico-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:35.282019  649367 config.go:182] Loaded profile config "default-k8s-diff-port-679865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:35.282092  649367 config.go:182] Loaded profile config "kindnet-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:35.282181  649367 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:02:35.310410  649367 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:02:35.310552  649367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:35.377856  649367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:02:35.368049617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:35.378022  649367 docker.go:319] overlay module found
	I1115 10:02:35.380011  649367 out.go:179] * Using the docker driver based on user configuration
	I1115 10:02:35.381429  649367 start.go:309] selected driver: docker
	I1115 10:02:35.381450  649367 start.go:930] validating driver "docker" against <nil>
	I1115 10:02:35.381468  649367 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:02:35.382147  649367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:35.445272  649367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:02:35.434873974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:35.445532  649367 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:02:35.445778  649367 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:02:35.447611  649367 out.go:179] * Using Docker driver with root privileges
	I1115 10:02:35.448801  649367 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1115 10:02:35.448834  649367 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1115 10:02:35.448920  649367 start.go:353] cluster config:
	{Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:35.450260  649367 out.go:179] * Starting "custom-flannel-034018" primary control-plane node in "custom-flannel-034018" cluster
	I1115 10:02:35.451343  649367 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:02:35.452583  649367 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:02:35.453593  649367 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:35.453622  649367 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:02:35.453637  649367 cache.go:65] Caching tarball of preloaded images
	I1115 10:02:35.453686  649367 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:02:35.453740  649367 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:02:35.453757  649367 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:02:35.453870  649367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/config.json ...
	I1115 10:02:35.453895  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/config.json: {Name:mkef5e2dcd913c15d1ebc8389cefd875c35f1fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:35.476800  649367 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:02:35.476822  649367 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:02:35.476838  649367 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:02:35.476871  649367 start.go:360] acquireMachinesLock for custom-flannel-034018: {Name:mk4f15785111481e33d475ca13a1243eff9b873a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:02:35.476995  649367 start.go:364] duration metric: took 99.687µs to acquireMachinesLock for "custom-flannel-034018"
	I1115 10:02:35.477027  649367 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:35.477118  649367 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:02:31.214615  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	W1115 10:02:33.713645  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	I1115 10:02:32.589887  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:02:32.609503  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:02:32.628339  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:02:32.648486  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:02:32.668564  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:02:32.689944  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:02:32.722060  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:02:32.743088  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:02:32.763297  644840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:02:32.777113  644840 ssh_runner.go:195] Run: openssl version
	I1115 10:02:32.783950  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:02:32.792687  644840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:32.796649  644840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:32.796708  644840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:32.831764  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:02:32.841081  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:02:32.850213  644840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:02:32.854192  644840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:02:32.854253  644840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:02:32.891611  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:02:32.902336  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:02:32.912294  644840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:02:32.916476  644840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:02:32.916547  644840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:02:32.952292  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:02:32.962175  644840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:02:32.966152  644840 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:02:32.966209  644840 kubeadm.go:401] StartCluster: {Name:calico-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:32.966282  644840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:02:32.966324  644840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:02:32.994915  644840 cri.go:89] found id: ""
	I1115 10:02:32.994998  644840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:02:33.003677  644840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:02:33.012123  644840 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:02:33.012208  644840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:02:33.020447  644840 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:02:33.020465  644840 kubeadm.go:158] found existing configuration files:
	
	I1115 10:02:33.020513  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:02:33.028241  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:02:33.028298  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:02:33.037048  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:02:33.046060  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:02:33.046123  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:02:33.054791  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:02:33.064606  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:02:33.064672  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:02:33.073644  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:02:33.082193  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:02:33.082248  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:02:33.090626  644840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:02:33.152361  644840 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:02:33.213232  644840 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1115 10:02:33.823730  636459 node_ready.go:57] node "kindnet-034018" has "Ready":"False" status (will retry)
	W1115 10:02:35.824154  636459 node_ready.go:57] node "kindnet-034018" has "Ready":"False" status (will retry)
	W1115 10:02:37.824454  636459 node_ready.go:57] node "kindnet-034018" has "Ready":"False" status (will retry)
	I1115 10:02:35.479073  649367 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:02:35.479333  649367 start.go:159] libmachine.API.Create for "custom-flannel-034018" (driver="docker")
	I1115 10:02:35.479408  649367 client.go:173] LocalClient.Create starting
	I1115 10:02:35.479520  649367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:02:35.479569  649367 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:35.479594  649367 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:35.479676  649367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:02:35.479704  649367 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:35.479720  649367 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:35.480092  649367 cli_runner.go:164] Run: docker network inspect custom-flannel-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:02:35.500373  649367 cli_runner.go:211] docker network inspect custom-flannel-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:02:35.500480  649367 network_create.go:284] running [docker network inspect custom-flannel-034018] to gather additional debugging logs...
	I1115 10:02:35.500505  649367 cli_runner.go:164] Run: docker network inspect custom-flannel-034018
	W1115 10:02:35.519840  649367 cli_runner.go:211] docker network inspect custom-flannel-034018 returned with exit code 1
	I1115 10:02:35.519898  649367 network_create.go:287] error running [docker network inspect custom-flannel-034018]: docker network inspect custom-flannel-034018: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-034018 not found
	I1115 10:02:35.519922  649367 network_create.go:289] output of [docker network inspect custom-flannel-034018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-034018 not found
	
	** /stderr **
	I1115 10:02:35.520063  649367 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:02:35.539803  649367 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:02:35.540569  649367 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:02:35.541091  649367 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:02:35.542262  649367 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eea7b0}
	I1115 10:02:35.542305  649367 network_create.go:124] attempt to create docker network custom-flannel-034018 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:02:35.542370  649367 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-034018 custom-flannel-034018
	I1115 10:02:35.592669  649367 network_create.go:108] docker network custom-flannel-034018 192.168.76.0/24 created
	I1115 10:02:35.592701  649367 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-034018" container
	I1115 10:02:35.592774  649367 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:02:35.610664  649367 cli_runner.go:164] Run: docker volume create custom-flannel-034018 --label name.minikube.sigs.k8s.io=custom-flannel-034018 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:02:35.630035  649367 oci.go:103] Successfully created a docker volume custom-flannel-034018
	I1115 10:02:35.630130  649367 cli_runner.go:164] Run: docker run --rm --name custom-flannel-034018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-034018 --entrypoint /usr/bin/test -v custom-flannel-034018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:02:36.034487  649367 oci.go:107] Successfully prepared a docker volume custom-flannel-034018
	I1115 10:02:36.034568  649367 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:36.034581  649367 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:02:36.034643  649367 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-034018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 10:02:35.714140  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	W1115 10:02:38.213504  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	I1115 10:02:39.226488  635342 pod_ready.go:94] pod "coredns-66bc5c9577-wknnh" is "Ready"
	I1115 10:02:39.226518  635342 pod_ready.go:86] duration metric: took 31.518592512s for pod "coredns-66bc5c9577-wknnh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.229611  635342 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.512764  635342 pod_ready.go:94] pod "etcd-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:39.512795  635342 pod_ready.go:86] duration metric: took 283.155084ms for pod "etcd-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.515158  635342 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.519946  635342 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:39.519976  635342 pod_ready.go:86] duration metric: took 4.795161ms for pod "kube-apiserver-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.522139  635342 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.526209  635342 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:39.526231  635342 pod_ready.go:86] duration metric: took 4.071228ms for pod "kube-controller-manager-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.709576  635342 pod_ready.go:83] waiting for pod "kube-proxy-qhrzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.012248  635342 pod_ready.go:94] pod "kube-proxy-qhrzp" is "Ready"
	I1115 10:02:40.012275  635342 pod_ready.go:86] duration metric: took 302.672043ms for pod "kube-proxy-qhrzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.211612  635342 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.613299  635342 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:40.613329  635342 pod_ready.go:86] duration metric: took 401.68638ms for pod "kube-scheduler-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.613344  635342 pod_ready.go:40] duration metric: took 32.96835097s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:02:40.676570  635342 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:02:40.678784  635342 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-679865" cluster and "default" namespace by default
	I1115 10:02:39.509359  636459 node_ready.go:49] node "kindnet-034018" is "Ready"
	I1115 10:02:39.509410  636459 node_ready.go:38] duration metric: took 12.189223332s for node "kindnet-034018" to be "Ready" ...
	I1115 10:02:39.509429  636459 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:02:39.509486  636459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:02:39.525100  636459 api_server.go:72] duration metric: took 13.192925177s to wait for apiserver process to appear ...
	I1115 10:02:39.525128  636459 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:02:39.525152  636459 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:02:39.529487  636459 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:02:39.530435  636459 api_server.go:141] control plane version: v1.34.1
	I1115 10:02:39.530460  636459 api_server.go:131] duration metric: took 5.324809ms to wait for apiserver health ...
	I1115 10:02:39.530468  636459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:02:39.534134  636459 system_pods.go:59] 8 kube-system pods found
	I1115 10:02:39.534180  636459 system_pods.go:61] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:39.534190  636459 system_pods.go:61] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:39.534198  636459 system_pods.go:61] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:39.534203  636459 system_pods.go:61] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:39.534208  636459 system_pods.go:61] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:39.534212  636459 system_pods.go:61] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:39.534218  636459 system_pods.go:61] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:39.534222  636459 system_pods.go:61] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:39.534230  636459 system_pods.go:74] duration metric: took 3.756652ms to wait for pod list to return data ...
	I1115 10:02:39.534240  636459 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:02:39.537078  636459 default_sa.go:45] found service account: "default"
	I1115 10:02:39.537099  636459 default_sa.go:55] duration metric: took 2.852959ms for default service account to be created ...
	I1115 10:02:39.537107  636459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:02:39.540496  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:39.540525  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:39.540536  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:39.540545  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:39.540549  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:39.540552  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:39.540556  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:39.540560  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:39.540570  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:39.540596  636459 retry.go:31] will retry after 241.274525ms: missing components: kube-dns
	I1115 10:02:39.785835  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:39.785874  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:39.785884  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:39.785892  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:39.785898  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:39.785903  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:39.785910  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:39.785914  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:39.785921  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:39.785944  636459 retry.go:31] will retry after 277.873372ms: missing components: kube-dns
	I1115 10:02:40.256195  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:40.256238  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:40.256246  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:40.256253  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:40.256258  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:40.256264  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:40.256270  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:40.256275  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:40.256291  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:40.256316  636459 retry.go:31] will retry after 437.871457ms: missing components: kube-dns
	I1115 10:02:40.704496  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:40.704548  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:40.704557  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:40.704567  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:40.704580  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:40.704587  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:40.704600  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:40.704606  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:40.704618  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:40.704637  636459 retry.go:31] will retry after 444.592689ms: missing components: kube-dns
	I1115 10:02:41.154187  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:41.154223  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Running
	I1115 10:02:41.154231  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:41.154237  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:41.154241  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:41.154246  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:41.154251  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:41.154258  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:41.154263  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Running
	I1115 10:02:41.154276  636459 system_pods.go:126] duration metric: took 1.617160811s to wait for k8s-apps to be running ...
	I1115 10:02:41.154286  636459 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:02:41.154338  636459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:41.174088  636459 system_svc.go:56] duration metric: took 19.774098ms WaitForService to wait for kubelet
	I1115 10:02:41.174132  636459 kubeadm.go:587] duration metric: took 14.841962058s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:02:41.174158  636459 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:02:41.178568  636459 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:02:41.178742  636459 node_conditions.go:123] node cpu capacity is 8
	I1115 10:02:41.178762  636459 node_conditions.go:105] duration metric: took 4.599139ms to run NodePressure ...
	I1115 10:02:41.178777  636459 start.go:242] waiting for startup goroutines ...
	I1115 10:02:41.178813  636459 start.go:247] waiting for cluster config update ...
	I1115 10:02:41.178827  636459 start.go:256] writing updated cluster config ...
	I1115 10:02:41.179104  636459 ssh_runner.go:195] Run: rm -f paused
	I1115 10:02:41.184181  636459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:02:41.190294  636459 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wztnb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.196375  636459 pod_ready.go:94] pod "coredns-66bc5c9577-wztnb" is "Ready"
	I1115 10:02:41.196440  636459 pod_ready.go:86] duration metric: took 6.115378ms for pod "coredns-66bc5c9577-wztnb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.199523  636459 pod_ready.go:83] waiting for pod "etcd-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.204842  636459 pod_ready.go:94] pod "etcd-kindnet-034018" is "Ready"
	I1115 10:02:41.204871  636459 pod_ready.go:86] duration metric: took 5.324021ms for pod "etcd-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.207644  636459 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.212751  636459 pod_ready.go:94] pod "kube-apiserver-kindnet-034018" is "Ready"
	I1115 10:02:41.212787  636459 pod_ready.go:86] duration metric: took 5.114219ms for pod "kube-apiserver-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.215274  636459 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.590268  636459 pod_ready.go:94] pod "kube-controller-manager-kindnet-034018" is "Ready"
	I1115 10:02:41.590301  636459 pod_ready.go:86] duration metric: took 375.00087ms for pod "kube-controller-manager-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.790369  636459 pod_ready.go:83] waiting for pod "kube-proxy-7vzzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.189644  636459 pod_ready.go:94] pod "kube-proxy-7vzzl" is "Ready"
	I1115 10:02:42.189678  636459 pod_ready.go:86] duration metric: took 399.24004ms for pod "kube-proxy-7vzzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.388910  636459 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.788634  636459 pod_ready.go:94] pod "kube-scheduler-kindnet-034018" is "Ready"
	I1115 10:02:42.788668  636459 pod_ready.go:86] duration metric: took 399.721581ms for pod "kube-scheduler-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.788684  636459 pod_ready.go:40] duration metric: took 1.604374209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:02:42.845445  636459 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:02:42.847648  636459 out.go:179] * Done! kubectl is now configured to use "kindnet-034018" cluster and "default" namespace by default
	I1115 10:02:40.637434  649367 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-034018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.602714234s)
	I1115 10:02:40.637481  649367 kic.go:203] duration metric: took 4.602893214s to extract preloaded images to volume ...
	W1115 10:02:40.637571  649367 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 10:02:40.637623  649367 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 10:02:40.637671  649367 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:02:40.706803  649367 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-034018 --name custom-flannel-034018 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-034018 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-034018 --network custom-flannel-034018 --ip 192.168.76.2 --volume custom-flannel-034018:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:02:41.105173  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Running}}
	I1115 10:02:41.133130  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Status}}
	I1115 10:02:41.161410  649367 cli_runner.go:164] Run: docker exec custom-flannel-034018 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:02:41.224450  649367 oci.go:144] the created container "custom-flannel-034018" has a running status.
	I1115 10:02:41.224486  649367 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa...
	I1115 10:02:41.950140  649367 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:02:41.980824  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Status}}
	I1115 10:02:42.001343  649367 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:02:42.001366  649367 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-034018 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1115 10:02:42.051657  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Status}}
	I1115 10:02:42.070893  649367 machine.go:94] provisionDockerMachine start ...
	I1115 10:02:42.070990  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.091910  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:42.092295  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:42.092320  649367 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:02:42.225535  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-034018
	
	I1115 10:02:42.225566  649367 ubuntu.go:182] provisioning hostname "custom-flannel-034018"
	I1115 10:02:42.225620  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.247681  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:42.247965  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:42.247983  649367 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-034018 && echo "custom-flannel-034018" | sudo tee /etc/hostname
	I1115 10:02:42.401785  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-034018
	
	I1115 10:02:42.401865  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.425094  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:42.425472  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:42.425498  649367 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-034018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-034018/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-034018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:02:42.567686  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1115 10:02:42.567718  649367 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:02:42.567741  649367 ubuntu.go:190] setting up certificates
	I1115 10:02:42.567753  649367 provision.go:84] configureAuth start
	I1115 10:02:42.567810  649367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-034018
	I1115 10:02:42.592209  649367 provision.go:143] copyHostCerts
	I1115 10:02:42.592272  649367 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:02:42.592284  649367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:02:42.592362  649367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:02:42.592528  649367 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:02:42.592544  649367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:02:42.592590  649367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:02:42.592729  649367 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:02:42.592738  649367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:02:42.592784  649367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:02:42.592888  649367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-034018 san=[127.0.0.1 192.168.76.2 custom-flannel-034018 localhost minikube]
	I1115 10:02:42.852117  649367 provision.go:177] copyRemoteCerts
	I1115 10:02:42.852188  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:02:42.852241  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.875649  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:42.977692  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:02:42.998053  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:02:43.016603  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:02:43.036718  649367 provision.go:87] duration metric: took 468.860558ms to configureAuth
	I1115 10:02:43.036752  649367 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:02:43.037668  649367 config.go:182] Loaded profile config "custom-flannel-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:43.037824  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.060967  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:43.061240  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:43.061257  649367 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:02:43.365912  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:02:43.365952  649367 machine.go:97] duration metric: took 1.295033439s to provisionDockerMachine
	I1115 10:02:43.365967  649367 client.go:176] duration metric: took 7.886547532s to LocalClient.Create
	I1115 10:02:43.365981  649367 start.go:167] duration metric: took 7.886647961s to libmachine.API.Create "custom-flannel-034018"
	I1115 10:02:43.365992  649367 start.go:293] postStartSetup for "custom-flannel-034018" (driver="docker")
	I1115 10:02:43.366006  649367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:02:43.366097  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:02:43.366147  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.388907  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.491404  649367 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:02:43.495642  649367 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:02:43.495678  649367 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:02:43.495691  649367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:02:43.495751  649367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:02:43.495882  649367 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:02:43.496015  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:02:43.504790  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:02:43.523983  649367 start.go:296] duration metric: took 157.975571ms for postStartSetup
	I1115 10:02:43.524428  649367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-034018
	I1115 10:02:43.544654  649367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/config.json ...
	I1115 10:02:43.544927  649367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:02:43.544982  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.563157  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.654817  649367 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:02:43.659896  649367 start.go:128] duration metric: took 8.182762549s to createHost
	I1115 10:02:43.659927  649367 start.go:83] releasing machines lock for "custom-flannel-034018", held for 8.182914742s
	I1115 10:02:43.660011  649367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-034018
	I1115 10:02:43.678527  649367 ssh_runner.go:195] Run: cat /version.json
	I1115 10:02:43.678580  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.678623  649367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:02:43.678707  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.698985  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.698989  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.846621  649367 ssh_runner.go:195] Run: systemctl --version
	I1115 10:02:43.853928  649367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:02:43.890879  649367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:02:43.896123  649367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:02:43.896192  649367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:02:43.921624  649367 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:02:43.921648  649367 start.go:496] detecting cgroup driver to use...
	I1115 10:02:43.921695  649367 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:02:43.921744  649367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:02:43.938119  649367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:02:43.951157  649367 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:02:43.951216  649367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:02:43.968501  649367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:02:43.988434  649367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:02:44.071514  649367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:02:44.166736  649367 docker.go:234] disabling docker service ...
	I1115 10:02:44.166812  649367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:02:44.185177  649367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:02:44.197791  649367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:02:44.293551  649367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:02:44.397948  649367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:02:44.414107  649367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:02:44.431161  649367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:02:44.431221  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.443162  649367 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:02:44.443235  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.453633  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.464869  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.474778  649367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:02:44.484524  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.494716  649367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.510816  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.521212  649367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:02:44.529754  649367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:02:44.539321  649367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:44.636253  649367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:02:44.760268  649367 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:02:44.760423  649367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:02:44.765346  649367 start.go:564] Will wait 60s for crictl version
	I1115 10:02:44.765429  649367 ssh_runner.go:195] Run: which crictl
	I1115 10:02:44.769780  649367 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:02:44.798132  649367 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1115 10:02:44.798222  649367 ssh_runner.go:195] Run: crio --version
	I1115 10:02:44.831619  649367 ssh_runner.go:195] Run: crio --version
	I1115 10:02:44.864792  649367 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:02:44.865852  649367 cli_runner.go:164] Run: docker network inspect custom-flannel-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:02:44.884591  649367 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:02:44.889135  649367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:02:44.899346  649367 kubeadm.go:884] updating cluster {Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:02:44.899491  649367 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:44.899544  649367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:02:44.931151  649367 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:02:44.931177  649367 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:02:44.931232  649367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:02:44.956373  649367 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:02:44.956408  649367 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:02:44.956419  649367 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:02:44.956504  649367 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-034018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1115 10:02:44.956573  649367 ssh_runner.go:195] Run: crio config
	I1115 10:02:45.005579  649367 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1115 10:02:45.005624  649367 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:02:45.005655  649367 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-034018 NodeName:custom-flannel-034018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:02:45.005822  649367 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-034018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:02:45.005893  649367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:02:45.015267  649367 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:02:45.015340  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:02:45.024303  649367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1115 10:02:45.039846  649367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:02:45.056540  649367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1115 10:02:45.069791  649367 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:02:45.073798  649367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:02:45.084867  649367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:45.168545  649367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:02:45.199949  649367 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018 for IP: 192.168.76.2
	I1115 10:02:45.199976  649367 certs.go:195] generating shared ca certs ...
	I1115 10:02:45.199997  649367 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.200174  649367 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:02:45.200215  649367 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:02:45.200224  649367 certs.go:257] generating profile certs ...
	I1115 10:02:45.200276  649367 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.key
	I1115 10:02:45.200296  649367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.crt with IP's: []
	I1115 10:02:46.397842  644840 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:02:46.397916  644840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:02:46.398047  644840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:02:46.398161  644840 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:02:46.398212  644840 kubeadm.go:319] OS: Linux
	I1115 10:02:46.398252  644840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:02:46.398341  644840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:02:46.398453  644840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:02:46.398537  644840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:02:46.398610  644840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:02:46.398676  644840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:02:46.398753  644840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:02:46.398807  644840 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 10:02:46.398867  644840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:02:46.398960  644840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:02:46.399039  644840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:02:46.399092  644840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:02:46.400673  644840 out.go:252]   - Generating certificates and keys ...
	I1115 10:02:46.400735  644840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:02:46.400798  644840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:02:46.400851  644840 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:02:46.400900  644840 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:02:46.400977  644840 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:02:46.401060  644840 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:02:46.401121  644840 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:02:46.401275  644840 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:02:46.401347  644840 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:02:46.401533  644840 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:02:46.401622  644840 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:02:46.401731  644840 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:02:46.401809  644840 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:02:46.401857  644840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:02:46.401906  644840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:02:46.401967  644840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:02:46.402045  644840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:02:46.402145  644840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:02:46.402224  644840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:02:46.402348  644840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:02:46.402483  644840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:02:46.403631  644840 out.go:252]   - Booting up control plane ...
	I1115 10:02:46.403729  644840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:02:46.403801  644840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:02:46.403906  644840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:02:46.404069  644840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:02:46.404192  644840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:02:46.404354  644840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:02:46.404510  644840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:02:46.404564  644840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:02:46.404759  644840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:02:46.404935  644840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:02:46.405011  644840 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000967721s
	I1115 10:02:46.405144  644840 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:02:46.405256  644840 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1115 10:02:46.405426  644840 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:02:46.405553  644840 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:02:46.405659  644840 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.625266743s
	I1115 10:02:46.405721  644840 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.120472428s
	I1115 10:02:46.405824  644840 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001410912s
	I1115 10:02:46.405926  644840 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:02:46.406089  644840 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:02:46.406183  644840 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:02:46.406405  644840 kubeadm.go:319] [mark-control-plane] Marking the node calico-034018 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:02:46.406467  644840 kubeadm.go:319] [bootstrap-token] Using token: 4kh1q4.55f6y1bx9do26yqz
	I1115 10:02:46.408136  644840 out.go:252]   - Configuring RBAC rules ...
	I1115 10:02:46.408242  644840 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:02:46.408347  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:02:46.408533  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:02:46.408715  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:02:46.408831  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:02:46.408928  644840 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:02:46.409079  644840 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:02:46.409140  644840 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:02:46.409204  644840 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:02:46.409210  644840 kubeadm.go:319] 
	I1115 10:02:46.409256  644840 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:02:46.409264  644840 kubeadm.go:319] 
	I1115 10:02:46.409333  644840 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:02:46.409338  644840 kubeadm.go:319] 
	I1115 10:02:46.409358  644840 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:02:46.409455  644840 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:02:46.409544  644840 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:02:46.409556  644840 kubeadm.go:319] 
	I1115 10:02:46.409637  644840 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:02:46.409643  644840 kubeadm.go:319] 
	I1115 10:02:46.409680  644840 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:02:46.409685  644840 kubeadm.go:319] 
	I1115 10:02:46.409725  644840 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:02:46.409835  644840 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:02:46.409923  644840 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:02:46.409931  644840 kubeadm.go:319] 
	I1115 10:02:46.410033  644840 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:02:46.410128  644840 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:02:46.410136  644840 kubeadm.go:319] 
	I1115 10:02:46.410236  644840 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4kh1q4.55f6y1bx9do26yqz \
	I1115 10:02:46.410378  644840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 10:02:46.410509  644840 kubeadm.go:319] 	--control-plane 
	I1115 10:02:46.410537  644840 kubeadm.go:319] 
	I1115 10:02:46.410642  644840 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:02:46.410650  644840 kubeadm.go:319] 
	I1115 10:02:46.410774  644840 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4kh1q4.55f6y1bx9do26yqz \
	I1115 10:02:46.410905  644840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
	I1115 10:02:46.410921  644840 cni.go:84] Creating CNI manager for "calico"
	I1115 10:02:46.412139  644840 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1115 10:02:46.413473  644840 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:02:46.413496  644840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1115 10:02:46.427915  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:02:47.255957  644840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:02:47.256067  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:47.256091  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-034018 minikube.k8s.io/updated_at=2025_11_15T10_02_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=calico-034018 minikube.k8s.io/primary=true
	I1115 10:02:47.266836  644840 ops.go:34] apiserver oom_adj: -16
	I1115 10:02:47.338584  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:45.344615  649367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.crt ...
	I1115 10:02:45.344646  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.crt: {Name:mk0fa7258f6db3366f793dc089f5f4f45a734d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.344863  649367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.key ...
	I1115 10:02:45.344883  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.key: {Name:mk531401ccd9ebacebc9c03c1cb5c6a2fd502c30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.344967  649367 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7
	I1115 10:02:45.344983  649367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:02:45.578266  649367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7 ...
	I1115 10:02:45.578296  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7: {Name:mk7e0faa1be1dded3dd5591d8cfeee4c5b392c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.578477  649367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7 ...
	I1115 10:02:45.578494  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7: {Name:mk7d65b281b31bb0ee355fa18804c1fb68818dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.578574  649367 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt
	I1115 10:02:45.578650  649367 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key
	I1115 10:02:45.578704  649367 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key
	I1115 10:02:45.578719  649367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt with IP's: []
	I1115 10:02:45.674869  649367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt ...
	I1115 10:02:45.674899  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt: {Name:mk09d98934bc3afe4ac441b39ae877b57d810611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.675100  649367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key ...
	I1115 10:02:45.675126  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key: {Name:mke8d3e24e45df297d68d3605421fbbe31dc6a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.675369  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 10:02:45.675431  649367 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 10:02:45.675448  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:02:45.675480  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:02:45.675522  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:02:45.675556  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 10:02:45.675613  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:02:45.676180  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:02:45.695150  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:02:45.714177  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:02:45.732547  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:02:45.750887  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 10:02:45.773221  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:02:45.792874  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:02:45.814133  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:02:45.835348  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:02:45.854847  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:02:45.873112  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:02:45.892545  649367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:02:45.906080  649367 ssh_runner.go:195] Run: openssl version
	I1115 10:02:45.912904  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:02:45.922929  649367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:02:45.927009  649367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:02:45.927071  649367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:02:45.962659  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:02:45.972550  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:02:45.982154  649367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:45.986305  649367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:45.986365  649367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:46.022557  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:02:46.031829  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:02:46.040911  649367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:02:46.045064  649367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:02:46.045134  649367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:02:46.085267  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:02:46.095218  649367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:02:46.099114  649367 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:02:46.099167  649367 kubeadm.go:401] StartCluster: {Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:46.099235  649367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:02:46.099276  649367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:02:46.126585  649367 cri.go:89] found id: ""
	I1115 10:02:46.126643  649367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:02:46.135099  649367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:02:46.143424  649367 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:02:46.143497  649367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:02:46.151355  649367 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:02:46.151382  649367 kubeadm.go:158] found existing configuration files:
	
	I1115 10:02:46.151488  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:02:46.160235  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:02:46.160585  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:02:46.168768  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:02:46.176550  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:02:46.176609  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:02:46.183948  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:02:46.192350  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:02:46.192449  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:02:46.199987  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:02:46.207935  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:02:46.207992  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:02:46.215925  649367 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:02:46.276551  649367 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:02:46.340158  649367 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:02:47.838791  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:48.339161  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:48.839494  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:49.339041  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:49.839160  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:50.339543  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:50.839217  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:50.928159  644840 kubeadm.go:1114] duration metric: took 3.672148283s to wait for elevateKubeSystemPrivileges
	I1115 10:02:50.928199  644840 kubeadm.go:403] duration metric: took 17.96199194s to StartCluster
	I1115 10:02:50.928223  644840 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:50.928300  644840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:50.930124  644840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:50.930409  644840 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:50.930560  644840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:02:50.930762  644840 config.go:182] Loaded profile config "calico-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:50.930712  644840 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:02:50.930800  644840 addons.go:70] Setting storage-provisioner=true in profile "calico-034018"
	I1115 10:02:50.930821  644840 addons.go:70] Setting default-storageclass=true in profile "calico-034018"
	I1115 10:02:50.930827  644840 addons.go:239] Setting addon storage-provisioner=true in "calico-034018"
	I1115 10:02:50.930840  644840 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-034018"
	I1115 10:02:50.930862  644840 host.go:66] Checking if "calico-034018" exists ...
	I1115 10:02:50.931192  644840 cli_runner.go:164] Run: docker container inspect calico-034018 --format={{.State.Status}}
	I1115 10:02:50.931361  644840 cli_runner.go:164] Run: docker container inspect calico-034018 --format={{.State.Status}}
	I1115 10:02:50.936648  644840 out.go:179] * Verifying Kubernetes components...
	I1115 10:02:50.938205  644840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:50.957935  644840 addons.go:239] Setting addon default-storageclass=true in "calico-034018"
	I1115 10:02:50.957991  644840 host.go:66] Checking if "calico-034018" exists ...
	I1115 10:02:50.958485  644840 cli_runner.go:164] Run: docker container inspect calico-034018 --format={{.State.Status}}
	I1115 10:02:50.960714  644840 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:02:50.961834  644840 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:50.961878  644840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:02:50.961934  644840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034018
	I1115 10:02:50.993443  644840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/calico-034018/id_rsa Username:docker}
	I1115 10:02:50.996985  644840 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:50.997010  644840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:02:50.997070  644840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034018
	I1115 10:02:51.026040  644840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/calico-034018/id_rsa Username:docker}
	I1115 10:02:51.055238  644840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:02:51.111782  644840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:02:51.129129  644840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:51.171061  644840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:51.309151  644840 node_ready.go:35] waiting up to 15m0s for node "calico-034018" to be "Ready" ...
	I1115 10:02:51.309501  644840 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1115 10:02:51.542072  644840 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:02:51.543267  644840 addons.go:515] duration metric: took 612.538136ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:02:51.813891  644840 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-034018" context rescaled to 1 replicas
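	
	The log above includes "waiting up to 15m0s for node \"calico-034018\" to be \"Ready\"". A minimal client-go sketch of that readiness poll is shown below; this is an illustration under assumptions, not minikube's actual implementation. The kubeconfig path and node name are taken from the log.

// waitready.go - sketch: poll a node's Ready condition until it is True or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used on the node in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 15m, matching the timeout reported in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 15*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "calico-034018", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}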
	
	
	==> CRI-O <==
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.120479974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.12070068Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0a20b7fd253df532d24ed08bc6153ff27ac1de96fcca04f6ef0a92bd8561314f/merged/etc/passwd: no such file or directory"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.120743445Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0a20b7fd253df532d24ed08bc6153ff27ac1de96fcca04f6ef0a92bd8561314f/merged/etc/group: no such file or directory"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.121066925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.167356385Z" level=info msg="Created container 41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2: kube-system/storage-provisioner/storage-provisioner" id=cf58c300-507a-4fc7-af69-f83d9b9640d7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.16816217Z" level=info msg="Starting container: 41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2" id=a40f0ebb-2a58-4fc5-a714-6b36a635a995 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.17023239Z" level=info msg="Started container" PID=1705 containerID=41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2 description=kube-system/storage-provisioner/storage-provisioner id=a40f0ebb-2a58-4fc5-a714-6b36a635a995 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6f78bc59cf594a378b2c405b0bb325c4617f2b7132ee4e5d3415316f4e5feaee
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.967869089Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.972432682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.972462814Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.972500711Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.975949035Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.975973189Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.975995129Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.979762848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.979787962Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.979806865Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.983074102Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.983096581Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.983111534Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.98634527Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.986365581Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.986383338Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.989928266Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.989953736Z" level=info msg="Updated default CNI network name to kindnet"
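	
	The CRI-O lines above show its CNI monitor reacting to CREATE/WRITE/RENAME events on /etc/cni/net.d as kindnet rewrites its conflist. A minimal sketch of that kind of directory watch using the fsnotify library follows; this is an assumption for illustration, not CRI-O's actual code.

// cniwatch.go - sketch: log filesystem events for the CNI configuration directory.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Directory taken from the log; each conflist change produces an event.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event := <-watcher.Events:
			log.Printf("CNI monitoring event %s %q", event.Op, event.Name)
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}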
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	41c0918e1f139       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   6f78bc59cf594       storage-provisioner                                    kube-system
	d905bb086e133       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   ea907d0834fa1       dashboard-metrics-scraper-6ffb444bf9-nq268             kubernetes-dashboard
	a12019be5efb2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   246fdac6aa1b0       kubernetes-dashboard-855c9754f9-24grr                  kubernetes-dashboard
	32fe67745ed10       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           47 seconds ago      Running             kube-proxy                  0                   c76b80ead2463       kube-proxy-qhrzp                                       kube-system
	b641794e62bea       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   fa2507d0014a8       kindnet-7j4zt                                          kube-system
	d8915a281afaa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   a86d2eba0bcc6       coredns-66bc5c9577-wknnh                               kube-system
	9724319435f1c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   7be1c140042a1       busybox                                                default
	b0faf6ec7f64c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   6f78bc59cf594       storage-provisioner                                    kube-system
	97ee6a21580e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   85cbe011c737c       etcd-default-k8s-diff-port-679865                      kube-system
	35c85b6acec1d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   d85d1fd2da2cc       kube-controller-manager-default-k8s-diff-port-679865   kube-system
	0d7cda73760c1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   b461a6a50d523       kube-apiserver-default-k8s-diff-port-679865            kube-system
	9282ef22a41e4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   66e2e21dab3c8       kube-scheduler-default-k8s-diff-port-679865            kube-system
	
	
	==> coredns [d8915a281afaa6736017a3530f1781a5398760b8d656d748a1d9e9da3d690f31] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41651 - 34843 "HINFO IN 3963982386183452308.1339123970555344780. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049673408s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
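	
	The CoreDNS errors above are repeated "dial tcp 10.96.0.1:443: i/o timeout" failures against the in-cluster kubernetes Service VIP. A minimal sketch of the same reachability check, runnable from inside a pod, is shown below (illustration only; the address is the one in the log).

// apicheck.go - sketch: can this pod open a TCP connection to the kubernetes Service VIP?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err) // corresponds to the "i/o timeout" seen in CoreDNS
		return
	}
	conn.Close()
	fmt.Println("kubernetes Service VIP is reachable")
}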
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-679865
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-679865
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=default-k8s-diff-port-679865
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_01_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-679865
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:02:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-679865
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                ba37645b-1855-4935-9368-1380eb8c0d66
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-wknnh                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-default-k8s-diff-port-679865                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-7j4zt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-679865             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-679865    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-qhrzp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-679865             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nq268              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-24grr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node default-k8s-diff-port-679865 event: Registered Node default-k8s-diff-port-679865 in Controller
	  Normal  NodeReady                94s                kubelet          Node default-k8s-diff-port-679865 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 52s)  kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 52s)  kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 52s)  kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node default-k8s-diff-port-679865 event: Registered Node default-k8s-diff-port-679865 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [97ee6a21580e9b7957b3dcf359e11e5b217e1a40e090ac2ee838797b9fdce0cc] <==
	{"level":"warn","ts":"2025-11-15T10:02:05.683975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.692588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.699332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.706533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.717458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.728142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.736230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.743614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.751606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.758910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.765748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.782505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.790150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.797371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.845363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:27.119333Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.604441ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597075022744865 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-679865\" mod_revision:584 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-679865\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-679865\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:02:27.120121Z","caller":"traceutil/trace.go:172","msg":"trace[1580484632] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"133.204579ms","start":"2025-11-15T10:02:26.986883Z","end":"2025-11-15T10:02:27.120087Z","steps":["trace[1580484632] 'process raft request'  (duration: 17.344099ms)","trace[1580484632] 'compare'  (duration: 114.30447ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:02:39.221149Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.366213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:4635"}
	{"level":"info","ts":"2025-11-15T10:02:39.221225Z","caller":"traceutil/trace.go:172","msg":"trace[36722220] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:624; }","duration":"102.486026ms","start":"2025-11-15T10:02:39.118724Z","end":"2025-11-15T10:02:39.221210Z","steps":["trace[36722220] 'agreement among raft nodes before linearized reading'  (duration: 96.492893ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:02:39.221789Z","caller":"traceutil/trace.go:172","msg":"trace[1315234641] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"125.364569ms","start":"2025-11-15T10:02:39.096409Z","end":"2025-11-15T10:02:39.221773Z","steps":["trace[1315234641] 'process raft request'  (duration: 118.801853ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:02:39.376265Z","caller":"traceutil/trace.go:172","msg":"trace[1681869455] linearizableReadLoop","detail":"{readStateIndex:661; appliedIndex:661; }","duration":"145.369376ms","start":"2025-11-15T10:02:39.230868Z","end":"2025-11-15T10:02:39.376238Z","steps":["trace[1681869455] 'read index received'  (duration: 145.347008ms)","trace[1681869455] 'applied index is now lower than readState.Index'  (duration: 9.78µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:02:39.505078Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"274.193797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-679865\" limit:1 ","response":"range_response_count:1 size:5995"}
	{"level":"info","ts":"2025-11-15T10:02:39.505148Z","caller":"traceutil/trace.go:172","msg":"trace[1616046677] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-679865; range_end:; response_count:1; response_revision:625; }","duration":"274.268265ms","start":"2025-11-15T10:02:39.230860Z","end":"2025-11-15T10:02:39.505128Z","steps":["trace[1616046677] 'agreement among raft nodes before linearized reading'  (duration: 145.482038ms)","trace[1616046677] 'range keys from in-memory index tree'  (duration: 128.635376ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:02:39.505445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.987411ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597075022744994 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:618 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4376 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:02:39.505513Z","caller":"traceutil/trace.go:172","msg":"trace[39070624] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"277.212266ms","start":"2025-11-15T10:02:39.228287Z","end":"2025-11-15T10:02:39.505499Z","steps":["trace[39070624] 'process raft request'  (duration: 147.976701ms)","trace[39070624] 'compare'  (duration: 128.640577ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:02:55 up  1:45,  0 user,  load average: 5.97, 3.85, 2.34
	Linux default-k8s-diff-port-679865 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b641794e62bea8b62572b411355c7f914cf43c7562c880b8c4edb09ed1669019] <==
	I1115 10:02:07.760810       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:02:07.761075       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:02:07.761298       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:02:07.761321       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:02:07.761350       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:02:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:02:07.963890       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:02:07.963957       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:02:07.963969       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:02:08.055843       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:02:38.056111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:02:38.056115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:02:38.056117       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:02:38.056278       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:02:39.164186       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:02:39.164236       1 metrics.go:72] Registering metrics
	I1115 10:02:39.164323       1 controller.go:711] "Syncing nftables rules"
	I1115 10:02:47.967511       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:02:47.967568       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d7cda73760c10da27ca408e9cf406330d687485abfb473948a9af8b77257d98] <==
	I1115 10:02:06.452626       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:02:06.452910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:02:06.453376       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:02:06.469852       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:02:06.483077       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:02:06.483141       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:02:06.483153       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:02:06.483160       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:02:06.483166       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:02:06.544688       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:02:06.544820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:02:06.548806       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:02:06.551945       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:02:06.867918       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:02:06.910724       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:02:06.934198       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:02:06.943100       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:02:06.955247       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:02:06.993369       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.73.203"}
	I1115 10:02:07.013228       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.56.218"}
	I1115 10:02:07.353437       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:02:09.823747       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:02:10.222713       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:02:10.373775       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [35c85b6acec1d4f4a155901044f09a0aad4f8ee6965e9a163bb790680c84c184] <==
	I1115 10:02:09.799478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:02:09.801551       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:02:09.818986       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:02:09.820193       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:02:09.820220       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:02:09.820248       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:02:09.820251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:02:09.820262       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:02:09.820295       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:02:09.820371       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:02:09.820433       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:02:09.820245       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:02:09.822221       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:02:09.825456       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:02:09.827803       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:02:09.827830       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:02:09.828952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:02:09.829878       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:02:09.830369       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:02:09.831525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:02:09.831647       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:02:09.848961       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:02:09.854879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:02:09.854898       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:02:09.854907       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [32fe67745ed10f95d2f17825b82c788a9ff22653f0f715cfcd3760aa162dd40a] <==
	I1115 10:02:07.674080       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:02:07.743212       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:02:07.843952       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:02:07.844003       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:02:07.844135       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:02:07.868801       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:02:07.868863       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:02:07.873943       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:02:07.874273       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:02:07.874289       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:02:07.875636       1 config.go:200] "Starting service config controller"
	I1115 10:02:07.875656       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:02:07.875782       1 config.go:309] "Starting node config controller"
	I1115 10:02:07.875795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:02:07.876175       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:02:07.876184       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:02:07.876205       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:02:07.876210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:02:07.975867       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:02:07.975881       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:02:07.976568       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:02:07.976586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9282ef22a41e45b12389c8dd7333237e091c1de52b31375f6caae152743253eb] <==
	I1115 10:02:06.441517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:02:06.444791       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:02:06.444889       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:02:06.447432       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:02:06.447500       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1115 10:02:06.447790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:02:06.454742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:02:06.454929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:02:06.454995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:02:06.458891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:02:06.458995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:02:06.459127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:02:06.460493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:02:06.460638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:02:06.460696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:02:06.461049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:02:06.461171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:02:06.461263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:02:06.461279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:02:06.461311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:02:06.461572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:02:06.461693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:02:06.461792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:02:06.463023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 10:02:08.045717       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:02:07 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:07.152353     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-xtables-lock\") pod \"kube-proxy-qhrzp\" (UID: \"ac94ddc3-4b28-4ca8-a5d5-877120496ee0\") " pod="kube-system/kube-proxy-qhrzp"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470318     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jm8j\" (UniqueName: \"kubernetes.io/projected/08ef7e61-370b-4274-ae6e-e14b1a7bcfb8-kube-api-access-5jm8j\") pod \"dashboard-metrics-scraper-6ffb444bf9-nq268\" (UID: \"08ef7e61-370b-4274-ae6e-e14b1a7bcfb8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470357     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1d81f82-7521-4a40-81a2-df544fe4a3a6-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-24grr\" (UID: \"a1d81f82-7521-4a40-81a2-df544fe4a3a6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-24grr"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470373     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/08ef7e61-370b-4274-ae6e-e14b1a7bcfb8-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nq268\" (UID: \"08ef7e61-370b-4274-ae6e-e14b1a7bcfb8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470418     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7z4d\" (UniqueName: \"kubernetes.io/projected/a1d81f82-7521-4a40-81a2-df544fe4a3a6-kube-api-access-m7z4d\") pod \"kubernetes-dashboard-855c9754f9-24grr\" (UID: \"a1d81f82-7521-4a40-81a2-df544fe4a3a6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-24grr"
	Nov 15 10:02:16 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:16.069414     734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-24grr" podStartSLOduration=1.397210005 podStartE2EDuration="6.069375515s" podCreationTimestamp="2025-11-15 10:02:10 +0000 UTC" firstStartedPulling="2025-11-15 10:02:10.773464349 +0000 UTC m=+6.905310137" lastFinishedPulling="2025-11-15 10:02:15.445629858 +0000 UTC m=+11.577475647" observedRunningTime="2025-11-15 10:02:16.068619143 +0000 UTC m=+12.200464941" watchObservedRunningTime="2025-11-15 10:02:16.069375515 +0000 UTC m=+12.201221318"
	Nov 15 10:02:19 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:19.053098     734 scope.go:117] "RemoveContainer" containerID="737dddeaa1527290ba65166ab35052f8f79681bfe63342aa5dcf2c4eb4d80576"
	Nov 15 10:02:20 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:20.057844     734 scope.go:117] "RemoveContainer" containerID="737dddeaa1527290ba65166ab35052f8f79681bfe63342aa5dcf2c4eb4d80576"
	Nov 15 10:02:20 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:20.057988     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:20 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:20.058162     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:21 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:21.062373     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:21 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:21.062607     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:23 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:23.488116     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:23 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:23.488411     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:35 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:35.977197     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:36 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:36.103100     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:36 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:36.103352     734 scope.go:117] "RemoveContainer" containerID="d905bb086e1338902f1ad7c01443492f6ff71442781f3952bf847f849778f855"
	Nov 15 10:02:36 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:36.103658     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:38 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:38.112454     734 scope.go:117] "RemoveContainer" containerID="b0faf6ec7f64ca9800ab743771a847d1b3a7eb0f8db4a21455d9a12122d0372d"
	Nov 15 10:02:43 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:43.487929     734 scope.go:117] "RemoveContainer" containerID="d905bb086e1338902f1ad7c01443492f6ff71442781f3952bf847f849778f855"
	Nov 15 10:02:43 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:43.488103     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: kubelet.service: Consumed 1.693s CPU time.
	
	
	==> kubernetes-dashboard [a12019be5efb212443fa3cd0d63f001ce894d1d08de1f00d096804524401e2cf] <==
	2025/11/15 10:02:15 Using namespace: kubernetes-dashboard
	2025/11/15 10:02:15 Using in-cluster config to connect to apiserver
	2025/11/15 10:02:15 Using secret token for csrf signing
	2025/11/15 10:02:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:02:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:02:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:02:15 Generating JWE encryption key
	2025/11/15 10:02:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:02:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:02:15 Initializing JWE encryption key from synchronized object
	2025/11/15 10:02:15 Creating in-cluster Sidecar client
	2025/11/15 10:02:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:02:15 Serving insecurely on HTTP port: 9090
	2025/11/15 10:02:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:02:15 Starting overwatch
	
	
	==> storage-provisioner [41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2] <==
	I1115 10:02:38.184903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:02:38.193385       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:02:38.193469       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:02:38.195866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:41.650569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:45.911411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:49.510080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:52.564204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:55.587007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:55.591322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:55.591510       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:02:55.591589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30a7c389-2335-4677-b5bc-b5dcc414ee67", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-679865_9ccb1e71-9c39-4f75-9ea0-3e954bb544e9 became leader
	I1115 10:02:55.591668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-679865_9ccb1e71-9c39-4f75-9ea0-3e954bb544e9!
	W1115 10:02:55.594080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:55.597488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:55.692566       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-679865_9ccb1e71-9c39-4f75-9ea0-3e954bb544e9!
	
	
	==> storage-provisioner [b0faf6ec7f64ca9800ab743771a847d1b3a7eb0f8db4a21455d9a12122d0372d] <==
	I1115 10:02:07.337889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:02:37.340868       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865: exit status 2 (384.380007ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-679865 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-679865
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-679865:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2",
	        "Created": "2025-11-15T10:00:47.592632721Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 635645,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T10:01:55.739124371Z",
	            "FinishedAt": "2025-11-15T10:01:54.535544936Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/hosts",
	        "LogPath": "/var/lib/docker/containers/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2/0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2-json.log",
	        "Name": "/default-k8s-diff-port-679865",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-679865:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-679865",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b40f93214039e0eb314fb227de0740bcaa88d289f8d0d76ee0fc588ff3d33b2",
	                "LowerDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e-init/diff:/var/lib/docker/overlay2/b69775dd1a44971a16630a99a8f37d58097f503cd631c589833796a6664e2b72/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9a47e17df51e0706eb06fed8bfcae68caad912487e3e04528cdc868dad95f4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-679865",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-679865/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-679865",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-679865",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-679865",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "156d2e6f90aec03f84e3091ccb2cd6c454a72268eb4b7022e6f0b6c227d6fd7f",
	            "SandboxKey": "/var/run/docker/netns/156d2e6f90ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-679865": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0a7ab291fd7d7a6f03caec52507c3e2e0702cb6e9e4295365d7aba23864f9771",
	                    "EndpointID": "d33aa3f488ae2fbbef4ff7321fbefac99e554b785c43f8e115c621aa06ab5257",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "fe:2b:bd:5f:c3:e5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-679865",
	                        "0b40f9321403"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865: exit status 2 (367.941563ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-679865 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-679865 logs -n 25: (1.539915784s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-034018 sudo docker system info                                                                                                                             │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cri-dockerd --version                                                                                                                          │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p auto-034018 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo containerd config dump                                                                                                                         │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ ssh     │ -p auto-034018 sudo crio config                                                                                                                                    │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ delete  │ -p auto-034018                                                                                                                                                     │ auto-034018                  │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ start   │ -p calico-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-034018                │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ image   │ embed-certs-430513 image list --format=json                                                                                                                        │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ pause   │ -p embed-certs-430513 --alsologtostderr -v=1                                                                                                                       │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ delete  │ -p embed-certs-430513                                                                                                                                              │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ delete  │ -p embed-certs-430513                                                                                                                                              │ embed-certs-430513           │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ start   │ -p custom-flannel-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-034018        │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	│ ssh     │ -p kindnet-034018 pgrep -a kubelet                                                                                                                                 │ kindnet-034018               │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ image   │ default-k8s-diff-port-679865 image list --format=json                                                                                                              │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │ 15 Nov 25 10:02 UTC │
	│ pause   │ -p default-k8s-diff-port-679865 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-679865 │ jenkins │ v1.37.0 │ 15 Nov 25 10:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 10:02:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 10:02:35.265541  649367 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:35.265685  649367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:35.265696  649367 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:35.265703  649367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:35.266453  649367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 10:02:35.267290  649367 out.go:368] Setting JSON to false
	I1115 10:02:35.268837  649367 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6296,"bootTime":1763194659,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:02:35.268949  649367 start.go:143] virtualization: kvm guest
	I1115 10:02:35.270822  649367 out.go:179] * [custom-flannel-034018] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:02:35.272526  649367 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 10:02:35.272554  649367 notify.go:221] Checking for updates...
	I1115 10:02:35.275314  649367 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:02:35.276558  649367 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:35.277888  649367 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 10:02:35.279068  649367 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:02:35.280252  649367 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:02:35.281908  649367 config.go:182] Loaded profile config "calico-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:35.282019  649367 config.go:182] Loaded profile config "default-k8s-diff-port-679865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:35.282092  649367 config.go:182] Loaded profile config "kindnet-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:35.282181  649367 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:02:35.310410  649367 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:02:35.310552  649367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:35.377856  649367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:02:35.368049617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:35.378022  649367 docker.go:319] overlay module found
	I1115 10:02:35.380011  649367 out.go:179] * Using the docker driver based on user configuration
	I1115 10:02:35.381429  649367 start.go:309] selected driver: docker
	I1115 10:02:35.381450  649367 start.go:930] validating driver "docker" against <nil>
	I1115 10:02:35.381468  649367 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:02:35.382147  649367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:35.445272  649367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:02:35.434873974 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:35.445532  649367 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 10:02:35.445778  649367 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:02:35.447611  649367 out.go:179] * Using Docker driver with root privileges
	I1115 10:02:35.448801  649367 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1115 10:02:35.448834  649367 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1115 10:02:35.448920  649367 start.go:353] cluster config:
	{Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:35.450260  649367 out.go:179] * Starting "custom-flannel-034018" primary control-plane node in "custom-flannel-034018" cluster
	I1115 10:02:35.451343  649367 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 10:02:35.452583  649367 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1115 10:02:35.453593  649367 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:35.453622  649367 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 10:02:35.453637  649367 cache.go:65] Caching tarball of preloaded images
	I1115 10:02:35.453686  649367 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 10:02:35.453740  649367 preload.go:238] Found /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1115 10:02:35.453757  649367 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1115 10:02:35.453870  649367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/config.json ...
	I1115 10:02:35.453895  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/config.json: {Name:mkef5e2dcd913c15d1ebc8389cefd875c35f1fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:35.476800  649367 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1115 10:02:35.476822  649367 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1115 10:02:35.476838  649367 cache.go:243] Successfully downloaded all kic artifacts
	I1115 10:02:35.476871  649367 start.go:360] acquireMachinesLock for custom-flannel-034018: {Name:mk4f15785111481e33d475ca13a1243eff9b873a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1115 10:02:35.476995  649367 start.go:364] duration metric: took 99.687µs to acquireMachinesLock for "custom-flannel-034018"
	I1115 10:02:35.477027  649367 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:35.477118  649367 start.go:125] createHost starting for "" (driver="docker")
	W1115 10:02:31.214615  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	W1115 10:02:33.713645  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	I1115 10:02:32.589887  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:02:32.609503  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1115 10:02:32.628339  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1115 10:02:32.648486  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:02:32.668564  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/calico-034018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1115 10:02:32.689944  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:02:32.722060  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:02:32.743088  644840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:02:32.763297  644840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:02:32.777113  644840 ssh_runner.go:195] Run: openssl version
	I1115 10:02:32.783950  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:02:32.792687  644840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:32.796649  644840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:32.796708  644840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:32.831764  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:02:32.841081  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:02:32.850213  644840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:02:32.854192  644840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:02:32.854253  644840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:02:32.891611  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:02:32.902336  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:02:32.912294  644840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:02:32.916476  644840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:02:32.916547  644840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:02:32.952292  644840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:02:32.962175  644840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:02:32.966152  644840 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:02:32.966209  644840 kubeadm.go:401] StartCluster: {Name:calico-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:32.966282  644840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:02:32.966324  644840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:02:32.994915  644840 cri.go:89] found id: ""
	I1115 10:02:32.994998  644840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:02:33.003677  644840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:02:33.012123  644840 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:02:33.012208  644840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:02:33.020447  644840 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:02:33.020465  644840 kubeadm.go:158] found existing configuration files:
	
	I1115 10:02:33.020513  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:02:33.028241  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:02:33.028298  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:02:33.037048  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:02:33.046060  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:02:33.046123  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:02:33.054791  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:02:33.064606  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:02:33.064672  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:02:33.073644  644840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:02:33.082193  644840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:02:33.082248  644840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1115 10:02:33.090626  644840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:02:33.152361  644840 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:02:33.213232  644840 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1115 10:02:33.823730  636459 node_ready.go:57] node "kindnet-034018" has "Ready":"False" status (will retry)
	W1115 10:02:35.824154  636459 node_ready.go:57] node "kindnet-034018" has "Ready":"False" status (will retry)
	W1115 10:02:37.824454  636459 node_ready.go:57] node "kindnet-034018" has "Ready":"False" status (will retry)
	I1115 10:02:35.479073  649367 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1115 10:02:35.479333  649367 start.go:159] libmachine.API.Create for "custom-flannel-034018" (driver="docker")
	I1115 10:02:35.479408  649367 client.go:173] LocalClient.Create starting
	I1115 10:02:35.479520  649367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem
	I1115 10:02:35.479569  649367 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:35.479594  649367 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:35.479676  649367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem
	I1115 10:02:35.479704  649367 main.go:143] libmachine: Decoding PEM data...
	I1115 10:02:35.479720  649367 main.go:143] libmachine: Parsing certificate...
	I1115 10:02:35.480092  649367 cli_runner.go:164] Run: docker network inspect custom-flannel-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1115 10:02:35.500373  649367 cli_runner.go:211] docker network inspect custom-flannel-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1115 10:02:35.500480  649367 network_create.go:284] running [docker network inspect custom-flannel-034018] to gather additional debugging logs...
	I1115 10:02:35.500505  649367 cli_runner.go:164] Run: docker network inspect custom-flannel-034018
	W1115 10:02:35.519840  649367 cli_runner.go:211] docker network inspect custom-flannel-034018 returned with exit code 1
	I1115 10:02:35.519898  649367 network_create.go:287] error running [docker network inspect custom-flannel-034018]: docker network inspect custom-flannel-034018: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-034018 not found
	I1115 10:02:35.519922  649367 network_create.go:289] output of [docker network inspect custom-flannel-034018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-034018 not found
	
	** /stderr **
	I1115 10:02:35.520063  649367 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:02:35.539803  649367 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
	I1115 10:02:35.540569  649367 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cc9c79f9c19e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:9a:52:90:2e:14} reservation:<nil>}
	I1115 10:02:35.541091  649367 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-309565720ebf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:66:38:13:6a:5d} reservation:<nil>}
	I1115 10:02:35.542262  649367 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eea7b0}
	I1115 10:02:35.542305  649367 network_create.go:124] attempt to create docker network custom-flannel-034018 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1115 10:02:35.542370  649367 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-034018 custom-flannel-034018
	I1115 10:02:35.592669  649367 network_create.go:108] docker network custom-flannel-034018 192.168.76.0/24 created
	I1115 10:02:35.592701  649367 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-034018" container
	I1115 10:02:35.592774  649367 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1115 10:02:35.610664  649367 cli_runner.go:164] Run: docker volume create custom-flannel-034018 --label name.minikube.sigs.k8s.io=custom-flannel-034018 --label created_by.minikube.sigs.k8s.io=true
	I1115 10:02:35.630035  649367 oci.go:103] Successfully created a docker volume custom-flannel-034018
	I1115 10:02:35.630130  649367 cli_runner.go:164] Run: docker run --rm --name custom-flannel-034018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-034018 --entrypoint /usr/bin/test -v custom-flannel-034018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1115 10:02:36.034487  649367 oci.go:107] Successfully prepared a docker volume custom-flannel-034018
	I1115 10:02:36.034568  649367 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:36.034581  649367 kic.go:194] Starting extracting preloaded images to volume ...
	I1115 10:02:36.034643  649367 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-034018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1115 10:02:35.714140  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	W1115 10:02:38.213504  635342 pod_ready.go:104] pod "coredns-66bc5c9577-wknnh" is not "Ready", error: <nil>
	I1115 10:02:39.226488  635342 pod_ready.go:94] pod "coredns-66bc5c9577-wknnh" is "Ready"
	I1115 10:02:39.226518  635342 pod_ready.go:86] duration metric: took 31.518592512s for pod "coredns-66bc5c9577-wknnh" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.229611  635342 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.512764  635342 pod_ready.go:94] pod "etcd-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:39.512795  635342 pod_ready.go:86] duration metric: took 283.155084ms for pod "etcd-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.515158  635342 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.519946  635342 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:39.519976  635342 pod_ready.go:86] duration metric: took 4.795161ms for pod "kube-apiserver-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.522139  635342 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.526209  635342 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:39.526231  635342 pod_ready.go:86] duration metric: took 4.071228ms for pod "kube-controller-manager-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:39.709576  635342 pod_ready.go:83] waiting for pod "kube-proxy-qhrzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.012248  635342 pod_ready.go:94] pod "kube-proxy-qhrzp" is "Ready"
	I1115 10:02:40.012275  635342 pod_ready.go:86] duration metric: took 302.672043ms for pod "kube-proxy-qhrzp" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.211612  635342 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.613299  635342 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-679865" is "Ready"
	I1115 10:02:40.613329  635342 pod_ready.go:86] duration metric: took 401.68638ms for pod "kube-scheduler-default-k8s-diff-port-679865" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:40.613344  635342 pod_ready.go:40] duration metric: took 32.96835097s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:02:40.676570  635342 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:02:40.678784  635342 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-679865" cluster and "default" namespace by default
	I1115 10:02:39.509359  636459 node_ready.go:49] node "kindnet-034018" is "Ready"
	I1115 10:02:39.509410  636459 node_ready.go:38] duration metric: took 12.189223332s for node "kindnet-034018" to be "Ready" ...
	I1115 10:02:39.509429  636459 api_server.go:52] waiting for apiserver process to appear ...
	I1115 10:02:39.509486  636459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 10:02:39.525100  636459 api_server.go:72] duration metric: took 13.192925177s to wait for apiserver process to appear ...
	I1115 10:02:39.525128  636459 api_server.go:88] waiting for apiserver healthz status ...
	I1115 10:02:39.525152  636459 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1115 10:02:39.529487  636459 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1115 10:02:39.530435  636459 api_server.go:141] control plane version: v1.34.1
	I1115 10:02:39.530460  636459 api_server.go:131] duration metric: took 5.324809ms to wait for apiserver health ...
	I1115 10:02:39.530468  636459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1115 10:02:39.534134  636459 system_pods.go:59] 8 kube-system pods found
	I1115 10:02:39.534180  636459 system_pods.go:61] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:39.534190  636459 system_pods.go:61] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:39.534198  636459 system_pods.go:61] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:39.534203  636459 system_pods.go:61] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:39.534208  636459 system_pods.go:61] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:39.534212  636459 system_pods.go:61] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:39.534218  636459 system_pods.go:61] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:39.534222  636459 system_pods.go:61] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:39.534230  636459 system_pods.go:74] duration metric: took 3.756652ms to wait for pod list to return data ...
	I1115 10:02:39.534240  636459 default_sa.go:34] waiting for default service account to be created ...
	I1115 10:02:39.537078  636459 default_sa.go:45] found service account: "default"
	I1115 10:02:39.537099  636459 default_sa.go:55] duration metric: took 2.852959ms for default service account to be created ...
	I1115 10:02:39.537107  636459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1115 10:02:39.540496  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:39.540525  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:39.540536  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:39.540545  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:39.540549  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:39.540552  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:39.540556  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:39.540560  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:39.540570  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:39.540596  636459 retry.go:31] will retry after 241.274525ms: missing components: kube-dns
	I1115 10:02:39.785835  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:39.785874  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:39.785884  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:39.785892  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:39.785898  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:39.785903  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:39.785910  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:39.785914  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:39.785921  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:39.785944  636459 retry.go:31] will retry after 277.873372ms: missing components: kube-dns
	I1115 10:02:40.256195  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:40.256238  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:40.256246  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:40.256253  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:40.256258  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:40.256264  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:40.256270  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:40.256275  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:40.256291  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:40.256316  636459 retry.go:31] will retry after 437.871457ms: missing components: kube-dns
	I1115 10:02:40.704496  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:40.704548  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1115 10:02:40.704557  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:40.704567  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:40.704580  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:40.704587  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:40.704600  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:40.704606  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:40.704618  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1115 10:02:40.704637  636459 retry.go:31] will retry after 444.592689ms: missing components: kube-dns
	I1115 10:02:41.154187  636459 system_pods.go:86] 8 kube-system pods found
	I1115 10:02:41.154223  636459 system_pods.go:89] "coredns-66bc5c9577-wztnb" [d1380b87-fde4-45c8-9981-693d61cf7cd0] Running
	I1115 10:02:41.154231  636459 system_pods.go:89] "etcd-kindnet-034018" [c417ea42-17ca-4157-9325-798a7021fa82] Running
	I1115 10:02:41.154237  636459 system_pods.go:89] "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
	I1115 10:02:41.154241  636459 system_pods.go:89] "kube-apiserver-kindnet-034018" [9a28e892-f55f-4a3f-a444-53a9b40a4f94] Running
	I1115 10:02:41.154246  636459 system_pods.go:89] "kube-controller-manager-kindnet-034018" [5b825fd8-0e25-41e3-a167-01929fd3db52] Running
	I1115 10:02:41.154251  636459 system_pods.go:89] "kube-proxy-7vzzl" [7147322a-cfc7-444d-be65-a6794547494c] Running
	I1115 10:02:41.154258  636459 system_pods.go:89] "kube-scheduler-kindnet-034018" [6a490137-0b41-4830-8604-d9900a91e8b4] Running
	I1115 10:02:41.154263  636459 system_pods.go:89] "storage-provisioner" [5fab2882-6c99-44ce-9142-546caf7319b0] Running
	I1115 10:02:41.154276  636459 system_pods.go:126] duration metric: took 1.617160811s to wait for k8s-apps to be running ...
	I1115 10:02:41.154286  636459 system_svc.go:44] waiting for kubelet service to be running ....
	I1115 10:02:41.154338  636459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 10:02:41.174088  636459 system_svc.go:56] duration metric: took 19.774098ms WaitForService to wait for kubelet
	I1115 10:02:41.174132  636459 kubeadm.go:587] duration metric: took 14.841962058s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1115 10:02:41.174158  636459 node_conditions.go:102] verifying NodePressure condition ...
	I1115 10:02:41.178568  636459 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1115 10:02:41.178742  636459 node_conditions.go:123] node cpu capacity is 8
	I1115 10:02:41.178762  636459 node_conditions.go:105] duration metric: took 4.599139ms to run NodePressure ...
	I1115 10:02:41.178777  636459 start.go:242] waiting for startup goroutines ...
	I1115 10:02:41.178813  636459 start.go:247] waiting for cluster config update ...
	I1115 10:02:41.178827  636459 start.go:256] writing updated cluster config ...
	I1115 10:02:41.179104  636459 ssh_runner.go:195] Run: rm -f paused
	I1115 10:02:41.184181  636459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:02:41.190294  636459 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wztnb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.196375  636459 pod_ready.go:94] pod "coredns-66bc5c9577-wztnb" is "Ready"
	I1115 10:02:41.196440  636459 pod_ready.go:86] duration metric: took 6.115378ms for pod "coredns-66bc5c9577-wztnb" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.199523  636459 pod_ready.go:83] waiting for pod "etcd-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.204842  636459 pod_ready.go:94] pod "etcd-kindnet-034018" is "Ready"
	I1115 10:02:41.204871  636459 pod_ready.go:86] duration metric: took 5.324021ms for pod "etcd-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.207644  636459 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.212751  636459 pod_ready.go:94] pod "kube-apiserver-kindnet-034018" is "Ready"
	I1115 10:02:41.212787  636459 pod_ready.go:86] duration metric: took 5.114219ms for pod "kube-apiserver-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.215274  636459 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.590268  636459 pod_ready.go:94] pod "kube-controller-manager-kindnet-034018" is "Ready"
	I1115 10:02:41.590301  636459 pod_ready.go:86] duration metric: took 375.00087ms for pod "kube-controller-manager-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:41.790369  636459 pod_ready.go:83] waiting for pod "kube-proxy-7vzzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.189644  636459 pod_ready.go:94] pod "kube-proxy-7vzzl" is "Ready"
	I1115 10:02:42.189678  636459 pod_ready.go:86] duration metric: took 399.24004ms for pod "kube-proxy-7vzzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.388910  636459 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.788634  636459 pod_ready.go:94] pod "kube-scheduler-kindnet-034018" is "Ready"
	I1115 10:02:42.788668  636459 pod_ready.go:86] duration metric: took 399.721581ms for pod "kube-scheduler-kindnet-034018" in "kube-system" namespace to be "Ready" or be gone ...
	I1115 10:02:42.788684  636459 pod_ready.go:40] duration metric: took 1.604374209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1115 10:02:42.845445  636459 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1115 10:02:42.847648  636459 out.go:179] * Done! kubectl is now configured to use "kindnet-034018" cluster and "default" namespace by default
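The pod_ready.go waits above poll each kube-system pod for its PodReady condition before declaring the cluster done. A minimal client-go sketch of that same check, assuming the default kubeconfig that the "Done!" line says now points at kindnet-034018 (not minikube's actual helper), could look like:

// readycheck.go: hypothetical sketch of the "Ready or be gone" check described
// by the pod_ready.go lines above. Kubeconfig path and the hard-coded
// kube-system namespace are assumptions for this sketch.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%-45s ready=%v\n", pods.Items[i].Name, podIsReady(&pods.Items[i]))
	}
}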
	I1115 10:02:40.637434  649367 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-034018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.602714234s)
	I1115 10:02:40.637481  649367 kic.go:203] duration metric: took 4.602893214s to extract preloaded images to volume ...
	W1115 10:02:40.637571  649367 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1115 10:02:40.637623  649367 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1115 10:02:40.637671  649367 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1115 10:02:40.706803  649367 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-034018 --name custom-flannel-034018 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-034018 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-034018 --network custom-flannel-034018 --ip 192.168.76.2 --volume custom-flannel-034018:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1115 10:02:41.105173  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Running}}
	I1115 10:02:41.133130  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Status}}
	I1115 10:02:41.161410  649367 cli_runner.go:164] Run: docker exec custom-flannel-034018 stat /var/lib/dpkg/alternatives/iptables
	I1115 10:02:41.224450  649367 oci.go:144] the created container "custom-flannel-034018" has a running status.
	I1115 10:02:41.224486  649367 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa...
	I1115 10:02:41.950140  649367 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1115 10:02:41.980824  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Status}}
	I1115 10:02:42.001343  649367 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1115 10:02:42.001366  649367 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-034018 chown docker:docker /home/docker/.ssh/authorized_keys]
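kic.go above generates an id_rsa key pair and copies the public half into the container's /home/docker/.ssh/authorized_keys. A minimal Go sketch of that key-pair generation with crypto/rsa and golang.org/x/crypto/ssh (file writing and paths omitted; purely illustrative) might be:

// kickey.go: hypothetical sketch of the id_rsa / authorized_keys pair kic.go
// creates above. Writing the files into .minikube/machines is left out.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half, PEM-encoded the way an id_rsa file would be.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Public half in authorized_keys format (what gets copied into the container).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("private key: %d bytes\nauthorized_keys line: %s", len(privPEM), ssh.MarshalAuthorizedKey(pub))
}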
	I1115 10:02:42.051657  649367 cli_runner.go:164] Run: docker container inspect custom-flannel-034018 --format={{.State.Status}}
	I1115 10:02:42.070893  649367 machine.go:94] provisionDockerMachine start ...
	I1115 10:02:42.070990  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.091910  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:42.092295  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:42.092320  649367 main.go:143] libmachine: About to run SSH command:
	hostname
	I1115 10:02:42.225535  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-034018
	
	I1115 10:02:42.225566  649367 ubuntu.go:182] provisioning hostname "custom-flannel-034018"
	I1115 10:02:42.225620  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.247681  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:42.247965  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:42.247983  649367 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-034018 && echo "custom-flannel-034018" | sudo tee /etc/hostname
	I1115 10:02:42.401785  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-034018
	
	I1115 10:02:42.401865  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.425094  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:42.425472  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:42.425498  649367 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-034018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-034018/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-034018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1115 10:02:42.567686  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
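The libmachine "native" SSH client above dials the container's forwarded port 127.0.0.1:33494 with that key and runs the hostname-provisioning commands. A minimal sketch of that exchange with golang.org/x/crypto/ssh, reusing the port and key path from this run (treat both as assumptions of the sketch), could be:

// sshprovision.go: minimal sketch (not libmachine itself) of running the
// hostname check over the forwarded SSH port shown above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33494", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}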
	I1115 10:02:42.567718  649367 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21895-355485/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-355485/.minikube}
	I1115 10:02:42.567741  649367 ubuntu.go:190] setting up certificates
	I1115 10:02:42.567753  649367 provision.go:84] configureAuth start
	I1115 10:02:42.567810  649367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-034018
	I1115 10:02:42.592209  649367 provision.go:143] copyHostCerts
	I1115 10:02:42.592272  649367 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem, removing ...
	I1115 10:02:42.592284  649367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem
	I1115 10:02:42.592362  649367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/ca.pem (1082 bytes)
	I1115 10:02:42.592528  649367 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem, removing ...
	I1115 10:02:42.592544  649367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem
	I1115 10:02:42.592590  649367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/cert.pem (1123 bytes)
	I1115 10:02:42.592729  649367 exec_runner.go:144] found /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem, removing ...
	I1115 10:02:42.592738  649367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem
	I1115 10:02:42.592784  649367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-355485/.minikube/key.pem (1679 bytes)
	I1115 10:02:42.592888  649367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-034018 san=[127.0.0.1 192.168.76.2 custom-flannel-034018 localhost minikube]
	I1115 10:02:42.852117  649367 provision.go:177] copyRemoteCerts
	I1115 10:02:42.852188  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1115 10:02:42.852241  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:42.875649  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:42.977692  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1115 10:02:42.998053  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1115 10:02:43.016603  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1115 10:02:43.036718  649367 provision.go:87] duration metric: took 468.860558ms to configureAuth
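configureAuth above signs a server certificate against the shared CA using the SAN list printed by provision.go (127.0.0.1, 192.168.76.2, the hostname aliases). A rough sketch of an equivalent issuance with Go's crypto/x509, assuming a PKCS#1 RSA CA key pair at placeholder paths (this is not minikube's actual helper), might be:

// servercert.go: hypothetical sketch of signing a server cert with the SANs the
// log line above lists. ca.pem / ca-key.pem paths and the PKCS#1 key format are
// assumptions of the sketch.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// must keeps the sketch short; a real implementation would propagate errors.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

// pemBlock reads a PEM file and returns the DER bytes of its first block.
func pemBlock(path string) ([]byte, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return nil, fmt.Errorf("no PEM block in %s", path)
	}
	return block.Bytes, nil
}

func main() {
	caCert := must(x509.ParseCertificate(must(pemBlock("ca.pem"))))
	caKey := must(x509.ParsePKCS1PrivateKey(must(pemBlock("ca-key.pem"))))
	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-034018"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list copied from the provision.go line above.
		DNSNames:    []string{"custom-flannel-034018", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}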
	I1115 10:02:43.036752  649367 ubuntu.go:206] setting minikube options for container-runtime
	I1115 10:02:43.037668  649367 config.go:182] Loaded profile config "custom-flannel-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:43.037824  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.060967  649367 main.go:143] libmachine: Using SSH client type: native
	I1115 10:02:43.061240  649367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33494 <nil> <nil>}
	I1115 10:02:43.061257  649367 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1115 10:02:43.365912  649367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1115 10:02:43.365952  649367 machine.go:97] duration metric: took 1.295033439s to provisionDockerMachine
	I1115 10:02:43.365967  649367 client.go:176] duration metric: took 7.886547532s to LocalClient.Create
	I1115 10:02:43.365981  649367 start.go:167] duration metric: took 7.886647961s to libmachine.API.Create "custom-flannel-034018"
	I1115 10:02:43.365992  649367 start.go:293] postStartSetup for "custom-flannel-034018" (driver="docker")
	I1115 10:02:43.366006  649367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1115 10:02:43.366097  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1115 10:02:43.366147  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.388907  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.491404  649367 ssh_runner.go:195] Run: cat /etc/os-release
	I1115 10:02:43.495642  649367 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1115 10:02:43.495678  649367 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1115 10:02:43.495691  649367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/addons for local assets ...
	I1115 10:02:43.495751  649367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-355485/.minikube/files for local assets ...
	I1115 10:02:43.495882  649367 filesync.go:149] local asset: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem -> 3590632.pem in /etc/ssl/certs
	I1115 10:02:43.496015  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1115 10:02:43.504790  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:02:43.523983  649367 start.go:296] duration metric: took 157.975571ms for postStartSetup
	I1115 10:02:43.524428  649367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-034018
	I1115 10:02:43.544654  649367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/config.json ...
	I1115 10:02:43.544927  649367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 10:02:43.544982  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.563157  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.654817  649367 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1115 10:02:43.659896  649367 start.go:128] duration metric: took 8.182762549s to createHost
	I1115 10:02:43.659927  649367 start.go:83] releasing machines lock for "custom-flannel-034018", held for 8.182914742s
	I1115 10:02:43.660011  649367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-034018
	I1115 10:02:43.678527  649367 ssh_runner.go:195] Run: cat /version.json
	I1115 10:02:43.678580  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.678623  649367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1115 10:02:43.678707  649367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-034018
	I1115 10:02:43.698985  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.698989  649367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33494 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/custom-flannel-034018/id_rsa Username:docker}
	I1115 10:02:43.846621  649367 ssh_runner.go:195] Run: systemctl --version
	I1115 10:02:43.853928  649367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1115 10:02:43.890879  649367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1115 10:02:43.896123  649367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1115 10:02:43.896192  649367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1115 10:02:43.921624  649367 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1115 10:02:43.921648  649367 start.go:496] detecting cgroup driver to use...
	I1115 10:02:43.921695  649367 detect.go:190] detected "systemd" cgroup driver on host os
	I1115 10:02:43.921744  649367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1115 10:02:43.938119  649367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1115 10:02:43.951157  649367 docker.go:218] disabling cri-docker service (if available) ...
	I1115 10:02:43.951216  649367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1115 10:02:43.968501  649367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1115 10:02:43.988434  649367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1115 10:02:44.071514  649367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1115 10:02:44.166736  649367 docker.go:234] disabling docker service ...
	I1115 10:02:44.166812  649367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1115 10:02:44.185177  649367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1115 10:02:44.197791  649367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1115 10:02:44.293551  649367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1115 10:02:44.397948  649367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1115 10:02:44.414107  649367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1115 10:02:44.431161  649367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1115 10:02:44.431221  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.443162  649367 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1115 10:02:44.443235  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.453633  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.464869  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.474778  649367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1115 10:02:44.484524  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.494716  649367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.510816  649367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1115 10:02:44.521212  649367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1115 10:02:44.529754  649367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1115 10:02:44.539321  649367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:44.636253  649367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1115 10:02:44.760268  649367 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1115 10:02:44.760423  649367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1115 10:02:44.765346  649367 start.go:564] Will wait 60s for crictl version
	I1115 10:02:44.765429  649367 ssh_runner.go:195] Run: which crictl
	I1115 10:02:44.769780  649367 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1115 10:02:44.798132  649367 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
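The crictl version call above talks to CRI-O over /var/run/crio/crio.sock, the endpoint written to /etc/crictl.yaml earlier in this run. A sketch of the same Version RPC issued directly against the CRI gRPC API (run inside the node container; the socket path is assumed from this log) could be:

// criversion.go: hypothetical sketch of the Version RPC behind "crictl version"
// above, using k8s.io/cri-api over gRPC.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("RuntimeName: %s\nRuntimeVersion: %s\nRuntimeApiVersion: %s\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}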
	I1115 10:02:44.798222  649367 ssh_runner.go:195] Run: crio --version
	I1115 10:02:44.831619  649367 ssh_runner.go:195] Run: crio --version
	I1115 10:02:44.864792  649367 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1115 10:02:44.865852  649367 cli_runner.go:164] Run: docker network inspect custom-flannel-034018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1115 10:02:44.884591  649367 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1115 10:02:44.889135  649367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:02:44.899346  649367 kubeadm.go:884] updating cluster {Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1115 10:02:44.899491  649367 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 10:02:44.899544  649367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:02:44.931151  649367 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:02:44.931177  649367 crio.go:433] Images already preloaded, skipping extraction
	I1115 10:02:44.931232  649367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1115 10:02:44.956373  649367 crio.go:514] all images are preloaded for cri-o runtime.
	I1115 10:02:44.956408  649367 cache_images.go:86] Images are preloaded, skipping loading
	I1115 10:02:44.956419  649367 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1115 10:02:44.956504  649367 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-034018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1115 10:02:44.956573  649367 ssh_runner.go:195] Run: crio config
	I1115 10:02:45.005579  649367 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1115 10:02:45.005624  649367 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1115 10:02:45.005655  649367 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-034018 NodeName:custom-flannel-034018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1115 10:02:45.005822  649367 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-034018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1115 10:02:45.005893  649367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1115 10:02:45.015267  649367 binaries.go:51] Found k8s binaries, skipping transfer
	I1115 10:02:45.015340  649367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1115 10:02:45.024303  649367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1115 10:02:45.039846  649367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1115 10:02:45.056540  649367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
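The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML file. A small sketch that splits those documents and prints each apiVersion/kind with sigs.k8s.io/yaml (the path is taken from the scp line above and is otherwise an assumption) might be:

// kubeadmkinds.go: hypothetical sketch that enumerates the documents in the
// generated kubeadm config shown above.
package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		// sigs.k8s.io/yaml converts to JSON first, so json tags apply here.
		var meta struct {
			APIVersion string `json:"apiVersion"`
			Kind       string `json:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
}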
	I1115 10:02:45.069791  649367 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1115 10:02:45.073798  649367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1115 10:02:45.084867  649367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:45.168545  649367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:02:45.199949  649367 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018 for IP: 192.168.76.2
	I1115 10:02:45.199976  649367 certs.go:195] generating shared ca certs ...
	I1115 10:02:45.199997  649367 certs.go:227] acquiring lock for ca certs: {Name:mk99d69aa10e850187f417a6cc7689982d9e79de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.200174  649367 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key
	I1115 10:02:45.200215  649367 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key
	I1115 10:02:45.200224  649367 certs.go:257] generating profile certs ...
	I1115 10:02:45.200276  649367 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.key
	I1115 10:02:45.200296  649367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.crt with IP's: []
	I1115 10:02:46.397842  644840 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1115 10:02:46.397916  644840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1115 10:02:46.398047  644840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1115 10:02:46.398161  644840 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1115 10:02:46.398212  644840 kubeadm.go:319] OS: Linux
	I1115 10:02:46.398252  644840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1115 10:02:46.398341  644840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1115 10:02:46.398453  644840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1115 10:02:46.398537  644840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1115 10:02:46.398610  644840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1115 10:02:46.398676  644840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1115 10:02:46.398753  644840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1115 10:02:46.398807  644840 kubeadm.go:319] CGROUPS_IO: enabled
	I1115 10:02:46.398867  644840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1115 10:02:46.398960  644840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1115 10:02:46.399039  644840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1115 10:02:46.399092  644840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1115 10:02:46.400673  644840 out.go:252]   - Generating certificates and keys ...
	I1115 10:02:46.400735  644840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1115 10:02:46.400798  644840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1115 10:02:46.400851  644840 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1115 10:02:46.400900  644840 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1115 10:02:46.400977  644840 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1115 10:02:46.401060  644840 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1115 10:02:46.401121  644840 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1115 10:02:46.401275  644840 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:02:46.401347  644840 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1115 10:02:46.401533  644840 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-034018 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1115 10:02:46.401622  644840 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1115 10:02:46.401731  644840 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1115 10:02:46.401809  644840 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1115 10:02:46.401857  644840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1115 10:02:46.401906  644840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1115 10:02:46.401967  644840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1115 10:02:46.402045  644840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1115 10:02:46.402145  644840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1115 10:02:46.402224  644840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1115 10:02:46.402348  644840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1115 10:02:46.402483  644840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1115 10:02:46.403631  644840 out.go:252]   - Booting up control plane ...
	I1115 10:02:46.403729  644840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1115 10:02:46.403801  644840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1115 10:02:46.403906  644840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1115 10:02:46.404069  644840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1115 10:02:46.404192  644840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1115 10:02:46.404354  644840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1115 10:02:46.404510  644840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1115 10:02:46.404564  644840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1115 10:02:46.404759  644840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1115 10:02:46.404935  644840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1115 10:02:46.405011  644840 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000967721s
	I1115 10:02:46.405144  644840 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1115 10:02:46.405256  644840 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1115 10:02:46.405426  644840 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1115 10:02:46.405553  644840 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1115 10:02:46.405659  644840 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.625266743s
	I1115 10:02:46.405721  644840 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.120472428s
	I1115 10:02:46.405824  644840 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001410912s
	I1115 10:02:46.405926  644840 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1115 10:02:46.406089  644840 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1115 10:02:46.406183  644840 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1115 10:02:46.406405  644840 kubeadm.go:319] [mark-control-plane] Marking the node calico-034018 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1115 10:02:46.406467  644840 kubeadm.go:319] [bootstrap-token] Using token: 4kh1q4.55f6y1bx9do26yqz
	I1115 10:02:46.408136  644840 out.go:252]   - Configuring RBAC rules ...
	I1115 10:02:46.408242  644840 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1115 10:02:46.408347  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1115 10:02:46.408533  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1115 10:02:46.408715  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1115 10:02:46.408831  644840 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1115 10:02:46.408928  644840 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1115 10:02:46.409079  644840 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1115 10:02:46.409140  644840 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1115 10:02:46.409204  644840 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1115 10:02:46.409210  644840 kubeadm.go:319] 
	I1115 10:02:46.409256  644840 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1115 10:02:46.409264  644840 kubeadm.go:319] 
	I1115 10:02:46.409333  644840 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1115 10:02:46.409338  644840 kubeadm.go:319] 
	I1115 10:02:46.409358  644840 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1115 10:02:46.409455  644840 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1115 10:02:46.409544  644840 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1115 10:02:46.409556  644840 kubeadm.go:319] 
	I1115 10:02:46.409637  644840 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1115 10:02:46.409643  644840 kubeadm.go:319] 
	I1115 10:02:46.409680  644840 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1115 10:02:46.409685  644840 kubeadm.go:319] 
	I1115 10:02:46.409725  644840 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1115 10:02:46.409835  644840 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1115 10:02:46.409923  644840 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1115 10:02:46.409931  644840 kubeadm.go:319] 
	I1115 10:02:46.410033  644840 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1115 10:02:46.410128  644840 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1115 10:02:46.410136  644840 kubeadm.go:319] 
	I1115 10:02:46.410236  644840 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4kh1q4.55f6y1bx9do26yqz \
	I1115 10:02:46.410378  644840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac \
	I1115 10:02:46.410509  644840 kubeadm.go:319] 	--control-plane 
	I1115 10:02:46.410537  644840 kubeadm.go:319] 
	I1115 10:02:46.410642  644840 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1115 10:02:46.410650  644840 kubeadm.go:319] 
	I1115 10:02:46.410774  644840 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4kh1q4.55f6y1bx9do26yqz \
	I1115 10:02:46.410905  644840 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:50adbc8488eeaf19803b1348a655fd19ec5fc6ad17f9ff7734300dc095f16eac 
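The join commands above carry a --discovery-token-ca-cert-hash, which kubeadm derives as the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A short sketch recomputing that value (the certificateDir path matches the one this run uses, but treat it as an assumption of the sketch) could be:

// cahash.go: hypothetical sketch recomputing the discovery-token-ca-cert-hash
// printed in the join command above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// cert.RawSubjectPublicKeyInfo is the DER-encoded SPKI that kubeadm hashes.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}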
	I1115 10:02:46.410921  644840 cni.go:84] Creating CNI manager for "calico"
	I1115 10:02:46.412139  644840 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1115 10:02:46.413473  644840 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1115 10:02:46.413496  644840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1115 10:02:46.427915  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1115 10:02:47.255957  644840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1115 10:02:47.256067  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:47.256091  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-034018 minikube.k8s.io/updated_at=2025_11_15T10_02_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=calico-034018 minikube.k8s.io/primary=true
	I1115 10:02:47.266836  644840 ops.go:34] apiserver oom_adj: -16
	I1115 10:02:47.338584  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:45.344615  649367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.crt ...
	I1115 10:02:45.344646  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.crt: {Name:mk0fa7258f6db3366f793dc089f5f4f45a734d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.344863  649367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.key ...
	I1115 10:02:45.344883  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/client.key: {Name:mk531401ccd9ebacebc9c03c1cb5c6a2fd502c30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.344967  649367 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7
	I1115 10:02:45.344983  649367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1115 10:02:45.578266  649367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7 ...
	I1115 10:02:45.578296  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7: {Name:mk7e0faa1be1dded3dd5591d8cfeee4c5b392c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.578477  649367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7 ...
	I1115 10:02:45.578494  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7: {Name:mk7d65b281b31bb0ee355fa18804c1fb68818dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.578574  649367 certs.go:382] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt.0b9160d7 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt
	I1115 10:02:45.578650  649367 certs.go:386] copying /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key.0b9160d7 -> /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key
	I1115 10:02:45.578704  649367 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key
	I1115 10:02:45.578719  649367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt with IP's: []
	I1115 10:02:45.674869  649367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt ...
	I1115 10:02:45.674899  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt: {Name:mk09d98934bc3afe4ac441b39ae877b57d810611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.675100  649367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key ...
	I1115 10:02:45.675126  649367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key: {Name:mke8d3e24e45df297d68d3605421fbbe31dc6a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:45.675369  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem (1338 bytes)
	W1115 10:02:45.675431  649367 certs.go:480] ignoring /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063_empty.pem, impossibly tiny 0 bytes
	I1115 10:02:45.675448  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca-key.pem (1675 bytes)
	I1115 10:02:45.675480  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/ca.pem (1082 bytes)
	I1115 10:02:45.675522  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/cert.pem (1123 bytes)
	I1115 10:02:45.675556  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/certs/key.pem (1679 bytes)
	I1115 10:02:45.675613  649367 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem (1708 bytes)
	I1115 10:02:45.676180  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1115 10:02:45.695150  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1115 10:02:45.714177  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1115 10:02:45.732547  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1115 10:02:45.750887  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1115 10:02:45.773221  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1115 10:02:45.792874  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1115 10:02:45.814133  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/custom-flannel-034018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1115 10:02:45.835348  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/certs/359063.pem --> /usr/share/ca-certificates/359063.pem (1338 bytes)
	I1115 10:02:45.854847  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/ssl/certs/3590632.pem --> /usr/share/ca-certificates/3590632.pem (1708 bytes)
	I1115 10:02:45.873112  649367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1115 10:02:45.892545  649367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1115 10:02:45.906080  649367 ssh_runner.go:195] Run: openssl version
	I1115 10:02:45.912904  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3590632.pem && ln -fs /usr/share/ca-certificates/3590632.pem /etc/ssl/certs/3590632.pem"
	I1115 10:02:45.922929  649367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3590632.pem
	I1115 10:02:45.927009  649367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 15 09:14 /usr/share/ca-certificates/3590632.pem
	I1115 10:02:45.927071  649367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3590632.pem
	I1115 10:02:45.962659  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3590632.pem /etc/ssl/certs/3ec20f2e.0"
	I1115 10:02:45.972550  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1115 10:02:45.982154  649367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:45.986305  649367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:08 /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:45.986365  649367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1115 10:02:46.022557  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1115 10:02:46.031829  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/359063.pem && ln -fs /usr/share/ca-certificates/359063.pem /etc/ssl/certs/359063.pem"
	I1115 10:02:46.040911  649367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/359063.pem
	I1115 10:02:46.045064  649367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 15 09:14 /usr/share/ca-certificates/359063.pem
	I1115 10:02:46.045134  649367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/359063.pem
	I1115 10:02:46.085267  649367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/359063.pem /etc/ssl/certs/51391683.0"
	I1115 10:02:46.095218  649367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1115 10:02:46.099114  649367 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1115 10:02:46.099167  649367 kubeadm.go:401] StartCluster: {Name:custom-flannel-034018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-034018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 10:02:46.099235  649367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1115 10:02:46.099276  649367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1115 10:02:46.126585  649367 cri.go:89] found id: ""
	I1115 10:02:46.126643  649367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1115 10:02:46.135099  649367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1115 10:02:46.143424  649367 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1115 10:02:46.143497  649367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1115 10:02:46.151355  649367 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1115 10:02:46.151382  649367 kubeadm.go:158] found existing configuration files:
	
	I1115 10:02:46.151488  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1115 10:02:46.160235  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1115 10:02:46.160585  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1115 10:02:46.168768  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1115 10:02:46.176550  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1115 10:02:46.176609  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1115 10:02:46.183948  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1115 10:02:46.192350  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1115 10:02:46.192449  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1115 10:02:46.199987  649367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1115 10:02:46.207935  649367 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1115 10:02:46.207992  649367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
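
The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. Condensed into a loop, using the same endpoint and paths that appear in the log (a sketch, not minikube's actual code):

    endpoint="https://control-plane.minikube.internal:8443"
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done
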
	I1115 10:02:46.215925  649367 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1115 10:02:46.276551  649367 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1115 10:02:46.340158  649367 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1115 10:02:47.838791  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:48.339161  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:48.839494  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:49.339041  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:49.839160  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:50.339543  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:50.839217  644840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1115 10:02:50.928159  644840 kubeadm.go:1114] duration metric: took 3.672148283s to wait for elevateKubeSystemPrivileges
	I1115 10:02:50.928199  644840 kubeadm.go:403] duration metric: took 17.96199194s to StartCluster
	I1115 10:02:50.928223  644840 settings.go:142] acquiring lock: {Name:mk74a722ac14463a421425ebab3a82973a406239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:50.928300  644840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 10:02:50.930124  644840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-355485/kubeconfig: {Name:mke495bb57b0cce730c495b99bb84a87548e2fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1115 10:02:50.930409  644840 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1115 10:02:50.930560  644840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1115 10:02:50.930762  644840 config.go:182] Loaded profile config "calico-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 10:02:50.930712  644840 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1115 10:02:50.930800  644840 addons.go:70] Setting storage-provisioner=true in profile "calico-034018"
	I1115 10:02:50.930821  644840 addons.go:70] Setting default-storageclass=true in profile "calico-034018"
	I1115 10:02:50.930827  644840 addons.go:239] Setting addon storage-provisioner=true in "calico-034018"
	I1115 10:02:50.930840  644840 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-034018"
	I1115 10:02:50.930862  644840 host.go:66] Checking if "calico-034018" exists ...
	I1115 10:02:50.931192  644840 cli_runner.go:164] Run: docker container inspect calico-034018 --format={{.State.Status}}
	I1115 10:02:50.931361  644840 cli_runner.go:164] Run: docker container inspect calico-034018 --format={{.State.Status}}
	I1115 10:02:50.936648  644840 out.go:179] * Verifying Kubernetes components...
	I1115 10:02:50.938205  644840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1115 10:02:50.957935  644840 addons.go:239] Setting addon default-storageclass=true in "calico-034018"
	I1115 10:02:50.957991  644840 host.go:66] Checking if "calico-034018" exists ...
	I1115 10:02:50.958485  644840 cli_runner.go:164] Run: docker container inspect calico-034018 --format={{.State.Status}}
	I1115 10:02:50.960714  644840 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1115 10:02:50.961834  644840 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:50.961878  644840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1115 10:02:50.961934  644840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034018
	I1115 10:02:50.993443  644840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/calico-034018/id_rsa Username:docker}
	I1115 10:02:50.996985  644840 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:50.997010  644840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1115 10:02:50.997070  644840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-034018
	I1115 10:02:51.026040  644840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/calico-034018/id_rsa Username:docker}
	I1115 10:02:51.055238  644840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1115 10:02:51.111782  644840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1115 10:02:51.129129  644840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1115 10:02:51.171061  644840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1115 10:02:51.309151  644840 node_ready.go:35] waiting up to 15m0s for node "calico-034018" to be "Ready" ...
	I1115 10:02:51.309501  644840 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1115 10:02:51.542072  644840 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1115 10:02:51.543267  644840 addons.go:515] duration metric: took 612.538136ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1115 10:02:51.813891  644840 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-034018" context rescaled to 1 replicas
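
The coredns ConfigMap edit logged above (confirmed by the "host record injected" line) adds a hosts block so cluster workloads can resolve host.minikube.internal to the host-side gateway. Expressed with plain kubectl instead of the bundled binary path used in the log, and omitting the extra log-plugin edit, it is roughly:

    # Sketch: inject host.minikube.internal into the Corefile (gateway IP taken from this run)
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -
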
	
	
	==> CRI-O <==
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.120479974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.12070068Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0a20b7fd253df532d24ed08bc6153ff27ac1de96fcca04f6ef0a92bd8561314f/merged/etc/passwd: no such file or directory"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.120743445Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0a20b7fd253df532d24ed08bc6153ff27ac1de96fcca04f6ef0a92bd8561314f/merged/etc/group: no such file or directory"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.121066925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.167356385Z" level=info msg="Created container 41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2: kube-system/storage-provisioner/storage-provisioner" id=cf58c300-507a-4fc7-af69-f83d9b9640d7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.16816217Z" level=info msg="Starting container: 41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2" id=a40f0ebb-2a58-4fc5-a714-6b36a635a995 name=/runtime.v1.RuntimeService/StartContainer
	Nov 15 10:02:38 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:38.17023239Z" level=info msg="Started container" PID=1705 containerID=41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2 description=kube-system/storage-provisioner/storage-provisioner id=a40f0ebb-2a58-4fc5-a714-6b36a635a995 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6f78bc59cf594a378b2c405b0bb325c4617f2b7132ee4e5d3415316f4e5feaee
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.967869089Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.972432682Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.972462814Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.972500711Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.975949035Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.975973189Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.975995129Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.979762848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.979787962Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.979806865Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.983074102Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.983096581Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.983111534Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.98634527Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.986365581Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.986383338Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.989928266Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 15 10:02:47 default-k8s-diff-port-679865 crio[569]: time="2025-11-15T10:02:47.989953736Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	41c0918e1f139       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   6f78bc59cf594       storage-provisioner                                    kube-system
	d905bb086e133       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   ea907d0834fa1       dashboard-metrics-scraper-6ffb444bf9-nq268             kubernetes-dashboard
	a12019be5efb2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   246fdac6aa1b0       kubernetes-dashboard-855c9754f9-24grr                  kubernetes-dashboard
	32fe67745ed10       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   c76b80ead2463       kube-proxy-qhrzp                                       kube-system
	b641794e62bea       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   fa2507d0014a8       kindnet-7j4zt                                          kube-system
	d8915a281afaa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   a86d2eba0bcc6       coredns-66bc5c9577-wknnh                               kube-system
	9724319435f1c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   7be1c140042a1       busybox                                                default
	b0faf6ec7f64c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   6f78bc59cf594       storage-provisioner                                    kube-system
	97ee6a21580e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   85cbe011c737c       etcd-default-k8s-diff-port-679865                      kube-system
	35c85b6acec1d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   d85d1fd2da2cc       kube-controller-manager-default-k8s-diff-port-679865   kube-system
	0d7cda73760c1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   b461a6a50d523       kube-apiserver-default-k8s-diff-port-679865            kube-system
	9282ef22a41e4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   66e2e21dab3c8       kube-scheduler-default-k8s-diff-port-679865            kube-system
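
The table above is the CRI-level view of the node's containers. With CRI-O as the runtime it can be reproduced with crictl, the same tool minikube drives earlier in this log; the --label filter is the one used in the StartCluster step:

    sudo crictl ps -a                                                              # all containers, including Exited ones
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system      # IDs only, kube-system pods
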
	
	
	==> coredns [d8915a281afaa6736017a3530f1781a5398760b8d656d748a1d9e9da3d690f31] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41651 - 34843 "HINFO IN 3963982386183452308.1339123970555344780. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049673408s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
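
The dial tcp 10.96.0.1:443: i/o timeout errors above show CoreDNS briefly unable to reach the in-cluster apiserver Service while the node was restarting; they typically clear once the apiserver is reachable again. A generic sanity check (not part of the test) is to confirm the Service and its endpoints exist:

    kubectl get svc kubernetes -n default -o wide                                   # ClusterIP should be 10.96.0.1
    kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes  # backing apiserver endpoints
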
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-679865
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-679865
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
	                    minikube.k8s.io/name=default-k8s-diff-port-679865
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T10_01_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 10:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-679865
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 10:02:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 10:02:37 +0000   Sat, 15 Nov 2025 10:01:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-679865
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                ba37645b-1855-4935-9368-1380eb8c0d66
	  Boot ID:                    0804eed8-f591-4232-9f72-e393b8ab1714
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-wknnh                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-679865                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-7j4zt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-679865             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-679865    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-qhrzp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-679865             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nq268              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-24grr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node default-k8s-diff-port-679865 event: Registered Node default-k8s-diff-port-679865 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-679865 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node default-k8s-diff-port-679865 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-679865 event: Registered Node default-k8s-diff-port-679865 in Controller
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff d6 e1 a0 05 d7 00 08 06
	[  +0.000364] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 da da 36 e2 04 08 06
	[Nov15 09:11] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.055605] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023958] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +1.023892] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +2.047871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +4.031655] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[  +8.383320] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[ +16.382757] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	[Nov15 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 20 1b 7f d2 43 0a c2 b0 24 ea 74 08 00
	
	
	==> etcd [97ee6a21580e9b7957b3dcf359e11e5b217e1a40e090ac2ee838797b9fdce0cc] <==
	{"level":"warn","ts":"2025-11-15T10:02:05.683975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.692588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.699332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.706533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.717458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.728142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.736230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.743614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.751606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.758910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.765748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.782505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.790150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.797371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:05.845363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T10:02:27.119333Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.604441ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597075022744865 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-679865\" mod_revision:584 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-679865\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-679865\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:02:27.120121Z","caller":"traceutil/trace.go:172","msg":"trace[1580484632] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"133.204579ms","start":"2025-11-15T10:02:26.986883Z","end":"2025-11-15T10:02:27.120087Z","steps":["trace[1580484632] 'process raft request'  (duration: 17.344099ms)","trace[1580484632] 'compare'  (duration: 114.30447ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:02:39.221149Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.366213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:4635"}
	{"level":"info","ts":"2025-11-15T10:02:39.221225Z","caller":"traceutil/trace.go:172","msg":"trace[36722220] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:624; }","duration":"102.486026ms","start":"2025-11-15T10:02:39.118724Z","end":"2025-11-15T10:02:39.221210Z","steps":["trace[36722220] 'agreement among raft nodes before linearized reading'  (duration: 96.492893ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:02:39.221789Z","caller":"traceutil/trace.go:172","msg":"trace[1315234641] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"125.364569ms","start":"2025-11-15T10:02:39.096409Z","end":"2025-11-15T10:02:39.221773Z","steps":["trace[1315234641] 'process raft request'  (duration: 118.801853ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-15T10:02:39.376265Z","caller":"traceutil/trace.go:172","msg":"trace[1681869455] linearizableReadLoop","detail":"{readStateIndex:661; appliedIndex:661; }","duration":"145.369376ms","start":"2025-11-15T10:02:39.230868Z","end":"2025-11-15T10:02:39.376238Z","steps":["trace[1681869455] 'read index received'  (duration: 145.347008ms)","trace[1681869455] 'applied index is now lower than readState.Index'  (duration: 9.78µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:02:39.505078Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"274.193797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-679865\" limit:1 ","response":"range_response_count:1 size:5995"}
	{"level":"info","ts":"2025-11-15T10:02:39.505148Z","caller":"traceutil/trace.go:172","msg":"trace[1616046677] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-679865; range_end:; response_count:1; response_revision:625; }","duration":"274.268265ms","start":"2025-11-15T10:02:39.230860Z","end":"2025-11-15T10:02:39.505128Z","steps":["trace[1616046677] 'agreement among raft nodes before linearized reading'  (duration: 145.482038ms)","trace[1616046677] 'range keys from in-memory index tree'  (duration: 128.635376ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-15T10:02:39.505445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.987411ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597075022744994 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:618 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4376 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-15T10:02:39.505513Z","caller":"traceutil/trace.go:172","msg":"trace[39070624] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"277.212266ms","start":"2025-11-15T10:02:39.228287Z","end":"2025-11-15T10:02:39.505499Z","steps":["trace[39070624] 'process raft request'  (duration: 147.976701ms)","trace[39070624] 'compare'  (duration: 128.640577ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:02:58 up  1:45,  0 user,  load average: 5.81, 3.85, 2.35
	Linux default-k8s-diff-port-679865 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b641794e62bea8b62572b411355c7f914cf43c7562c880b8c4edb09ed1669019] <==
	I1115 10:02:07.760810       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 10:02:07.761075       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1115 10:02:07.761298       1 main.go:148] setting mtu 1500 for CNI 
	I1115 10:02:07.761321       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 10:02:07.761350       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T10:02:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 10:02:07.963890       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 10:02:07.963957       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 10:02:07.963969       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 10:02:08.055843       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1115 10:02:38.056111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1115 10:02:38.056115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1115 10:02:38.056117       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1115 10:02:38.056278       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1115 10:02:39.164186       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 10:02:39.164236       1 metrics.go:72] Registering metrics
	I1115 10:02:39.164323       1 controller.go:711] "Syncing nftables rules"
	I1115 10:02:47.967511       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:02:47.967568       1 main.go:301] handling current node
	I1115 10:02:57.973025       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1115 10:02:57.973057       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d7cda73760c10da27ca408e9cf406330d687485abfb473948a9af8b77257d98] <==
	I1115 10:02:06.452626       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1115 10:02:06.452910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1115 10:02:06.453376       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1115 10:02:06.469852       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1115 10:02:06.483077       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1115 10:02:06.483141       1 aggregator.go:171] initial CRD sync complete...
	I1115 10:02:06.483153       1 autoregister_controller.go:144] Starting autoregister controller
	I1115 10:02:06.483160       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1115 10:02:06.483166       1 cache.go:39] Caches are synced for autoregister controller
	I1115 10:02:06.544688       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1115 10:02:06.544820       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1115 10:02:06.548806       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1115 10:02:06.551945       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 10:02:06.867918       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 10:02:06.910724       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 10:02:06.934198       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 10:02:06.943100       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 10:02:06.955247       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1115 10:02:06.993369       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.73.203"}
	I1115 10:02:07.013228       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.56.218"}
	I1115 10:02:07.353437       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 10:02:09.823747       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 10:02:10.222713       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 10:02:10.373775       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [35c85b6acec1d4f4a155901044f09a0aad4f8ee6965e9a163bb790680c84c184] <==
	I1115 10:02:09.799478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 10:02:09.801551       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1115 10:02:09.818986       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1115 10:02:09.820193       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1115 10:02:09.820220       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1115 10:02:09.820248       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1115 10:02:09.820251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1115 10:02:09.820262       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1115 10:02:09.820295       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1115 10:02:09.820371       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1115 10:02:09.820433       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1115 10:02:09.820245       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 10:02:09.822221       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1115 10:02:09.825456       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1115 10:02:09.827803       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:02:09.827830       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1115 10:02:09.828952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 10:02:09.829878       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 10:02:09.830369       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 10:02:09.831525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 10:02:09.831647       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 10:02:09.848961       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:02:09.854879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1115 10:02:09.854898       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1115 10:02:09.854907       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [32fe67745ed10f95d2f17825b82c788a9ff22653f0f715cfcd3760aa162dd40a] <==
	I1115 10:02:07.674080       1 server_linux.go:53] "Using iptables proxy"
	I1115 10:02:07.743212       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 10:02:07.843952       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 10:02:07.844003       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1115 10:02:07.844135       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 10:02:07.868801       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 10:02:07.868863       1 server_linux.go:132] "Using iptables Proxier"
	I1115 10:02:07.873943       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 10:02:07.874273       1 server.go:527] "Version info" version="v1.34.1"
	I1115 10:02:07.874289       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:02:07.875636       1 config.go:200] "Starting service config controller"
	I1115 10:02:07.875656       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 10:02:07.875782       1 config.go:309] "Starting node config controller"
	I1115 10:02:07.875795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 10:02:07.876175       1 config.go:106] "Starting endpoint slice config controller"
	I1115 10:02:07.876184       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 10:02:07.876205       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 10:02:07.876210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 10:02:07.975867       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 10:02:07.975881       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 10:02:07.976568       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 10:02:07.976586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9282ef22a41e45b12389c8dd7333237e091c1de52b31375f6caae152743253eb] <==
	I1115 10:02:06.441517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 10:02:06.444791       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:02:06.444889       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 10:02:06.447432       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1115 10:02:06.447500       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1115 10:02:06.447790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 10:02:06.454742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 10:02:06.454929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 10:02:06.454995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 10:02:06.458891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 10:02:06.458995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 10:02:06.459127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 10:02:06.460493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 10:02:06.460638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 10:02:06.460696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 10:02:06.461049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 10:02:06.461171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 10:02:06.461263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 10:02:06.461279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 10:02:06.461311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 10:02:06.461572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 10:02:06.461693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 10:02:06.461792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 10:02:06.463023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 10:02:08.045717       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 15 10:02:07 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:07.152353     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac94ddc3-4b28-4ca8-a5d5-877120496ee0-xtables-lock\") pod \"kube-proxy-qhrzp\" (UID: \"ac94ddc3-4b28-4ca8-a5d5-877120496ee0\") " pod="kube-system/kube-proxy-qhrzp"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470318     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jm8j\" (UniqueName: \"kubernetes.io/projected/08ef7e61-370b-4274-ae6e-e14b1a7bcfb8-kube-api-access-5jm8j\") pod \"dashboard-metrics-scraper-6ffb444bf9-nq268\" (UID: \"08ef7e61-370b-4274-ae6e-e14b1a7bcfb8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470357     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1d81f82-7521-4a40-81a2-df544fe4a3a6-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-24grr\" (UID: \"a1d81f82-7521-4a40-81a2-df544fe4a3a6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-24grr"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470373     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/08ef7e61-370b-4274-ae6e-e14b1a7bcfb8-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nq268\" (UID: \"08ef7e61-370b-4274-ae6e-e14b1a7bcfb8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268"
	Nov 15 10:02:10 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:10.470418     734 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7z4d\" (UniqueName: \"kubernetes.io/projected/a1d81f82-7521-4a40-81a2-df544fe4a3a6-kube-api-access-m7z4d\") pod \"kubernetes-dashboard-855c9754f9-24grr\" (UID: \"a1d81f82-7521-4a40-81a2-df544fe4a3a6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-24grr"
	Nov 15 10:02:16 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:16.069414     734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-24grr" podStartSLOduration=1.397210005 podStartE2EDuration="6.069375515s" podCreationTimestamp="2025-11-15 10:02:10 +0000 UTC" firstStartedPulling="2025-11-15 10:02:10.773464349 +0000 UTC m=+6.905310137" lastFinishedPulling="2025-11-15 10:02:15.445629858 +0000 UTC m=+11.577475647" observedRunningTime="2025-11-15 10:02:16.068619143 +0000 UTC m=+12.200464941" watchObservedRunningTime="2025-11-15 10:02:16.069375515 +0000 UTC m=+12.201221318"
	Nov 15 10:02:19 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:19.053098     734 scope.go:117] "RemoveContainer" containerID="737dddeaa1527290ba65166ab35052f8f79681bfe63342aa5dcf2c4eb4d80576"
	Nov 15 10:02:20 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:20.057844     734 scope.go:117] "RemoveContainer" containerID="737dddeaa1527290ba65166ab35052f8f79681bfe63342aa5dcf2c4eb4d80576"
	Nov 15 10:02:20 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:20.057988     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:20 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:20.058162     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:21 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:21.062373     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:21 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:21.062607     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:23 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:23.488116     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:23 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:23.488411     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:35 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:35.977197     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:36 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:36.103100     734 scope.go:117] "RemoveContainer" containerID="1521693d8618dc59d4ba30c241ef825b975b4d4c9091bf109fd2e77b539ee23c"
	Nov 15 10:02:36 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:36.103352     734 scope.go:117] "RemoveContainer" containerID="d905bb086e1338902f1ad7c01443492f6ff71442781f3952bf847f849778f855"
	Nov 15 10:02:36 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:36.103658     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:38 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:38.112454     734 scope.go:117] "RemoveContainer" containerID="b0faf6ec7f64ca9800ab743771a847d1b3a7eb0f8db4a21455d9a12122d0372d"
	Nov 15 10:02:43 default-k8s-diff-port-679865 kubelet[734]: I1115 10:02:43.487929     734 scope.go:117] "RemoveContainer" containerID="d905bb086e1338902f1ad7c01443492f6ff71442781f3952bf847f849778f855"
	Nov 15 10:02:43 default-k8s-diff-port-679865 kubelet[734]: E1115 10:02:43.488103     734 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nq268_kubernetes-dashboard(08ef7e61-370b-4274-ae6e-e14b1a7bcfb8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nq268" podUID="08ef7e61-370b-4274-ae6e-e14b1a7bcfb8"
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 15 10:02:52 default-k8s-diff-port-679865 systemd[1]: kubelet.service: Consumed 1.693s CPU time.
	
	
	==> kubernetes-dashboard [a12019be5efb212443fa3cd0d63f001ce894d1d08de1f00d096804524401e2cf] <==
	2025/11/15 10:02:15 Starting overwatch
	2025/11/15 10:02:15 Using namespace: kubernetes-dashboard
	2025/11/15 10:02:15 Using in-cluster config to connect to apiserver
	2025/11/15 10:02:15 Using secret token for csrf signing
	2025/11/15 10:02:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/15 10:02:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/15 10:02:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/15 10:02:15 Generating JWE encryption key
	2025/11/15 10:02:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/15 10:02:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/15 10:02:15 Initializing JWE encryption key from synchronized object
	2025/11/15 10:02:15 Creating in-cluster Sidecar client
	2025/11/15 10:02:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/15 10:02:15 Serving insecurely on HTTP port: 9090
	2025/11/15 10:02:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [41c0918e1f139e5b9c79fee38e2fd7c53a8fdec337292205b4d7fa1e7985ddb2] <==
	I1115 10:02:38.184903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1115 10:02:38.193385       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1115 10:02:38.193469       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1115 10:02:38.195866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:41.650569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:45.911411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:49.510080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:52.564204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:55.587007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:55.591322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:55.591510       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1115 10:02:55.591589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30a7c389-2335-4677-b5bc-b5dcc414ee67", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-679865_9ccb1e71-9c39-4f75-9ea0-3e954bb544e9 became leader
	I1115 10:02:55.591668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-679865_9ccb1e71-9c39-4f75-9ea0-3e954bb544e9!
	W1115 10:02:55.594080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:55.597488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1115 10:02:55.692566       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-679865_9ccb1e71-9c39-4f75-9ea0-3e954bb544e9!
	W1115 10:02:57.602249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 10:02:57.607695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b0faf6ec7f64ca9800ab743771a847d1b3a7eb0f8db4a21455d9a12122d0372d] <==
	I1115 10:02:07.337889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 10:02:37.340868       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865: exit status 2 (397.282797ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-679865 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.97s)
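
A note on the post-mortem above: the kube-scheduler's "Failed to watch ... is forbidden" errors at 10:02:06 are startup noise that clears once the extension-apiserver-authentication cache syncs (the log shows this at 10:02:08), the dashboard-metrics-scraper pod is in an ordinary CrashLoopBackOff with the kubelet doubling its back-off from 10s to 20s, and the first storage-provisioner instance exited because it could not reach the apiserver's Service ClusterIP (dial tcp 10.96.0.1:443: i/o timeout). If the scheduler's RBAC errors persisted past cache sync, a quick spot-check would be an impersonated kubectl auth can-i, for example (hypothetical invocation, reusing the context name from the log collection above):

    kubectl --context default-k8s-diff-port-679865 auth can-i list pods --as=system:kube-scheduler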

                                                
                                    

Test pass (262/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.14
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 11
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 0.85
22 TestOffline 56.18
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 135.52
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.44
48 TestAddons/StoppedEnableDisable 18.55
49 TestCertOptions 27.2
50 TestCertExpiration 213.97
52 TestForceSystemdFlag 25.76
53 TestForceSystemdEnv 26.23
58 TestErrorSpam/setup 19.61
59 TestErrorSpam/start 0.68
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 5.94
62 TestErrorSpam/unpause 6.04
63 TestErrorSpam/stop 2.59
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.5
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.26
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.9
75 TestFunctional/serial/CacheCmd/cache/add_local 1.85
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 42.47
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.25
86 TestFunctional/serial/LogsFileCmd 1.27
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 6.88
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 0.95
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 26.26
101 TestFunctional/parallel/SSHCmd 0.54
102 TestFunctional/parallel/CpCmd 1.7
103 TestFunctional/parallel/MySQL 18.39
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 1.67
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
113 TestFunctional/parallel/License 0.4
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.2
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/Version/short 0.06
127 TestFunctional/parallel/Version/components 0.48
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
132 TestFunctional/parallel/ImageCommands/ImageBuild 6.41
133 TestFunctional/parallel/ImageCommands/Setup 1.72
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
142 TestFunctional/parallel/ProfileCmd/profile_list 0.39
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
144 TestFunctional/parallel/MountCmd/any-port 7.9
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
148 TestFunctional/parallel/MountCmd/specific-port 1.78
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.72
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 147.6
163 TestMultiControlPlane/serial/DeployApp 5.42
164 TestMultiControlPlane/serial/PingHostFromPods 1.06
165 TestMultiControlPlane/serial/AddWorkerNode 54.49
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
168 TestMultiControlPlane/serial/CopyFile 17.52
169 TestMultiControlPlane/serial/StopSecondaryNode 13.37
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.11
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.61
176 TestMultiControlPlane/serial/StopCluster 43.08
177 TestMultiControlPlane/serial/RestartCluster 72.98
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 35.09
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 42.65
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.22
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 39.64
211 TestKicCustomNetwork/use_default_bridge_network 24.21
212 TestKicExistingNetwork 23.94
213 TestKicCustomSubnet 24.54
214 TestKicStaticIP 28.87
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.68
219 TestMountStart/serial/StartWithMountFirst 4.88
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 4.83
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 8.22
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 92.85
231 TestMultiNode/serial/DeployApp2Nodes 4.32
232 TestMultiNode/serial/PingHostFrom2Pods 0.74
233 TestMultiNode/serial/AddNode 54.23
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 9.83
237 TestMultiNode/serial/StopNode 2.26
238 TestMultiNode/serial/StartAfterStop 7.3
239 TestMultiNode/serial/RestartKeepsNodes 82.21
240 TestMultiNode/serial/DeleteNode 5.25
241 TestMultiNode/serial/StopMultiNode 28.62
242 TestMultiNode/serial/RestartMultiNode 26.99
243 TestMultiNode/serial/ValidateNameConflict 24.75
248 TestPreload 115.45
250 TestScheduledStopUnix 96.72
253 TestInsufficientStorage 12.36
254 TestRunningBinaryUpgrade 56.91
256 TestKubernetesUpgrade 314.95
257 TestMissingContainerUpgrade 118.65
258 TestStoppedBinaryUpgrade/Setup 2.71
259 TestStoppedBinaryUpgrade/Upgrade 89.31
260 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
269 TestPause/serial/Start 46.04
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
272 TestNoKubernetes/serial/StartWithK8s 26.49
280 TestNetworkPlugins/group/false 3.9
284 TestNoKubernetes/serial/StartWithStopK8s 25.15
285 TestPause/serial/SecondStartNoReconfiguration 5.94
287 TestNoKubernetes/serial/Start 4.16
288 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
290 TestNoKubernetes/serial/ProfileList 16.29
291 TestNoKubernetes/serial/Stop 2.14
292 TestNoKubernetes/serial/StartNoArgs 7.25
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
295 TestStartStop/group/old-k8s-version/serial/FirstStart 48.64
297 TestStartStop/group/no-preload/serial/FirstStart 53.27
298 TestStartStop/group/old-k8s-version/serial/DeployApp 10.24
300 TestStartStop/group/old-k8s-version/serial/Stop 16.07
301 TestStartStop/group/no-preload/serial/DeployApp 8.22
303 TestStartStop/group/no-preload/serial/Stop 16.29
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
305 TestStartStop/group/old-k8s-version/serial/SecondStart 46.46
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/no-preload/serial/SecondStart 43.93
309 TestStartStop/group/embed-certs/serial/FirstStart 40.71
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
321 TestStartStop/group/newest-cni/serial/FirstStart 31.36
322 TestStartStop/group/embed-certs/serial/DeployApp 9.28
324 TestStartStop/group/embed-certs/serial/Stop 16.87
325 TestNetworkPlugins/group/auto/Start 40.27
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
328 TestStartStop/group/embed-certs/serial/SecondStart 49.69
329 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/Stop 8.02
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 19.05
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
335 TestStartStop/group/newest-cni/serial/SecondStart 12.95
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
340 TestNetworkPlugins/group/auto/KubeletFlags 0.32
341 TestNetworkPlugins/group/auto/NetCatPod 8.19
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.67
344 TestNetworkPlugins/group/kindnet/Start 44.17
345 TestNetworkPlugins/group/auto/DNS 0.12
346 TestNetworkPlugins/group/auto/Localhost 0.1
347 TestNetworkPlugins/group/auto/HairPin 0.12
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
350 TestNetworkPlugins/group/calico/Start 53
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
353 TestNetworkPlugins/group/custom-flannel/Start 55.29
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
358 TestNetworkPlugins/group/kindnet/NetCatPod 8.19
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
361 TestNetworkPlugins/group/kindnet/DNS 0.14
362 TestNetworkPlugins/group/kindnet/Localhost 0.1
363 TestNetworkPlugins/group/kindnet/HairPin 0.14
364 TestNetworkPlugins/group/enable-default-cni/Start 61.66
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/Start 48.44
367 TestNetworkPlugins/group/calico/KubeletFlags 0.34
368 TestNetworkPlugins/group/calico/NetCatPod 10.26
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
371 TestNetworkPlugins/group/calico/DNS 0.17
372 TestNetworkPlugins/group/calico/Localhost 0.15
373 TestNetworkPlugins/group/calico/HairPin 0.13
374 TestNetworkPlugins/group/custom-flannel/DNS 0.13
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
377 TestNetworkPlugins/group/bridge/Start 32.96
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
382 TestNetworkPlugins/group/flannel/NetCatPod 9.17
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
386 TestNetworkPlugins/group/flannel/DNS 0.12
387 TestNetworkPlugins/group/flannel/Localhost 0.1
388 TestNetworkPlugins/group/flannel/HairPin 0.1
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
390 TestNetworkPlugins/group/bridge/NetCatPod 9.2
391 TestNetworkPlugins/group/bridge/DNS 0.12
392 TestNetworkPlugins/group/bridge/Localhost 0.1
393 TestNetworkPlugins/group/bridge/HairPin 0.1
x
+
TestDownloadOnly/v1.28.0/json-events (12.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-934087 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-934087 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.13797254s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.14s)
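
The -o=json flag makes minikube report its progress as machine-readable JSON events on stdout, which is presumably what the json-events assertions consume. To eyeball the same stream by hand, a rough sketch (assuming the events are emitted one JSON object per line and that jq is installed):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-934087 --force --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq .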

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1115 09:08:16.877601  359063 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1115 09:08:16.877704  359063 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
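
The preload-exists step appears to assert nothing more than that the tarball cached by the previous test is present on disk; a manual equivalent, using the path reported above, would be:

    ls -lh /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4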

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-934087
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-934087: exit status 85 (76.247982ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-934087 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-934087 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:08:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:08:04.793719  359075 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:08:04.793822  359075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:04.793828  359075 out.go:374] Setting ErrFile to fd 2...
	I1115 09:08:04.793832  359075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:04.794077  359075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	W1115 09:08:04.794203  359075 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21895-355485/.minikube/config/config.json: open /home/jenkins/minikube-integration/21895-355485/.minikube/config/config.json: no such file or directory
	I1115 09:08:04.794705  359075 out.go:368] Setting JSON to true
	I1115 09:08:04.795697  359075 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3026,"bootTime":1763194659,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:08:04.795789  359075 start.go:143] virtualization: kvm guest
	I1115 09:08:04.797860  359075 out.go:99] [download-only-934087] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:08:04.798024  359075 notify.go:221] Checking for updates...
	W1115 09:08:04.798029  359075 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball: no such file or directory
	I1115 09:08:04.799164  359075 out.go:171] MINIKUBE_LOCATION=21895
	I1115 09:08:04.800630  359075 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:08:04.801894  359075 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:08:04.806036  359075 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:08:04.807131  359075 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:08:04.809176  359075 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:08:04.809585  359075 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:08:04.832300  359075 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:08:04.832461  359075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:04.890040  359075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-15 09:08:04.880102584 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:04.890156  359075 docker.go:319] overlay module found
	I1115 09:08:04.891719  359075 out.go:99] Using the docker driver based on user configuration
	I1115 09:08:04.891763  359075 start.go:309] selected driver: docker
	I1115 09:08:04.891773  359075 start.go:930] validating driver "docker" against <nil>
	I1115 09:08:04.891863  359075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:04.948578  359075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-15 09:08:04.939213681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:04.948735  359075 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:08:04.949239  359075 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1115 09:08:04.949415  359075 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:08:04.951349  359075 out.go:171] Using Docker driver with root privileges
	I1115 09:08:04.952510  359075 cni.go:84] Creating CNI manager for ""
	I1115 09:08:04.952576  359075 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:08:04.952588  359075 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:08:04.952657  359075 start.go:353] cluster config:
	{Name:download-only-934087 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-934087 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:08:04.954004  359075 out.go:99] Starting "download-only-934087" primary control-plane node in "download-only-934087" cluster
	I1115 09:08:04.954024  359075 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:08:04.955231  359075 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:08:04.955270  359075 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:08:04.955304  359075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:08:04.972585  359075 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:08:04.972806  359075 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:08:04.972915  359075 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:08:05.286126  359075 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:08:05.286176  359075 cache.go:65] Caching tarball of preloaded images
	I1115 09:08:05.286376  359075 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1115 09:08:05.288276  359075 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1115 09:08:05.288311  359075 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:08:05.389320  359075 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1115 09:08:05.389470  359075 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1115 09:08:09.345069  359075 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	
	
	* The control-plane node download-only-934087 host does not exist
	  To start a cluster, run: "minikube start -p download-only-934087"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
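
The preload download in the log above is fetched against an md5 checksum retrieved from the GCS API (72bc7f8573f574c02d8c9a9b3496176b). To re-verify the cached tarball by hand against that value:

    md5sum /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4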

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-934087
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-369450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-369450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.00341766s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1115 09:08:28.347231  359063 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 09:08:28.347281  359063 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-369450
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-369450: exit status 85 (75.362638ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-934087 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-934087 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ delete  │ -p download-only-934087                                                                                                                                                   │ download-only-934087 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
	│ start   │ -o=json --download-only -p download-only-369450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-369450 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:08:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:08:17.399333  359450 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:08:17.399484  359450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:17.399490  359450 out.go:374] Setting ErrFile to fd 2...
	I1115 09:08:17.399495  359450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:08:17.399735  359450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:08:17.400205  359450 out.go:368] Setting JSON to true
	I1115 09:08:17.401200  359450 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3038,"bootTime":1763194659,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:08:17.401295  359450 start.go:143] virtualization: kvm guest
	I1115 09:08:17.403182  359450 out.go:99] [download-only-369450] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:08:17.403360  359450 notify.go:221] Checking for updates...
	I1115 09:08:17.404587  359450 out.go:171] MINIKUBE_LOCATION=21895
	I1115 09:08:17.405907  359450 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:08:17.407284  359450 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:08:17.408409  359450 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:08:17.409613  359450 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:08:17.411752  359450 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:08:17.412028  359450 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:08:17.435478  359450 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:08:17.435602  359450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:17.492068  359450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-15 09:08:17.482381827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:17.492170  359450 docker.go:319] overlay module found
	I1115 09:08:17.493755  359450 out.go:99] Using the docker driver based on user configuration
	I1115 09:08:17.493796  359450 start.go:309] selected driver: docker
	I1115 09:08:17.493805  359450 start.go:930] validating driver "docker" against <nil>
	I1115 09:08:17.493910  359450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:08:17.554852  359450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-15 09:08:17.545247832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:08:17.555031  359450 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:08:17.555620  359450 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1115 09:08:17.555830  359450 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:08:17.557478  359450 out.go:171] Using Docker driver with root privileges
	I1115 09:08:17.558466  359450 cni.go:84] Creating CNI manager for ""
	I1115 09:08:17.558529  359450 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1115 09:08:17.558540  359450 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1115 09:08:17.558609  359450 start.go:353] cluster config:
	{Name:download-only-369450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-369450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:08:17.559798  359450 out.go:99] Starting "download-only-369450" primary control-plane node in "download-only-369450" cluster
	I1115 09:08:17.559818  359450 cache.go:134] Beginning downloading kic base image for docker with crio
	I1115 09:08:17.560831  359450 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1115 09:08:17.560858  359450 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:17.560895  359450 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1115 09:08:17.578237  359450 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1115 09:08:17.578416  359450 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1115 09:08:17.578439  359450 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1115 09:08:17.578448  359450 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1115 09:08:17.578456  359450 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1115 09:08:17.895388  359450 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1115 09:08:17.895469  359450 cache.go:65] Caching tarball of preloaded images
	I1115 09:08:17.895733  359450 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1115 09:08:17.897598  359450 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1115 09:08:17.897632  359450 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1115 09:08:17.997696  359450 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1115 09:08:17.997750  359450 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21895-355485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-369450 host does not exist
	  To start a cluster, run: "minikube start -p download-only-369450"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-369450
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-876877 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-876877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-876877
--- PASS: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestBinaryMirror (0.85s)

=== RUN   TestBinaryMirror
I1115 09:08:29.543313  359063 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-730212 --alsologtostderr --binary-mirror http://127.0.0.1:42111 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-730212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-730212
--- PASS: TestBinaryMirror (0.85s)
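The binary.go line in this test shows the URL minikube falls back to when it does not cache the kubectl binary: the release URL with a checksum parameter that references the published .sha256 file. A rough manual equivalent of that checksum-verified download, assuming the v1.34.1 release from the log (an illustration only, not what the test itself runs):

	# download the kubectl binary and its published checksum, then verify them together
	curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
	curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check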

                                                
                                    
TestOffline (56.18s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-262645 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-262645 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (49.120595397s)
helpers_test.go:175: Cleaning up "offline-crio-262645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-262645
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-262645: (7.063396435s)
--- PASS: TestOffline (56.18s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-454747
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-454747: exit status 85 (65.300132ms)

                                                
                                                
-- stdout --
	* Profile "addons-454747" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-454747"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-454747
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-454747: exit status 85 (65.900652ms)

                                                
                                                
-- stdout --
	* Profile "addons-454747" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-454747"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (135.52s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-454747 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-454747 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m15.522635018s)
--- PASS: TestAddons/Setup (135.52s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-454747 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-454747 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-454747 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-454747 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [931f10c8-d125-48ec-a24c-4b0ad829febe] Pending
helpers_test.go:352: "busybox" [931f10c8-d125-48ec-a24c-4b0ad829febe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [931f10c8-d125-48ec-a24c-4b0ad829febe] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00333234s
addons_test.go:694: (dbg) Run:  kubectl --context addons-454747 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-454747 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-454747 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-454747
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-454747: (18.251093441s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-454747
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-454747
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-454747
--- PASS: TestAddons/StoppedEnableDisable (18.55s)

                                                
                                    
TestCertOptions (27.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-759344 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-759344 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.012333495s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-759344 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-759344 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-759344 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-759344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-759344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-759344: (2.466634934s)
--- PASS: TestCertOptions (27.20s)

                                                
                                    
TestCertExpiration (213.97s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-341243 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-341243 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.921956285s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-341243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-341243 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.590635682s)
helpers_test.go:175: Cleaning up "cert-expiration-341243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-341243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-341243: (2.459336595s)
--- PASS: TestCertExpiration (213.97s)

                                                
                                    
TestForceSystemdFlag (25.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-896620 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-896620 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.903969234s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-896620 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-896620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-896620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-896620: (2.54101213s)
--- PASS: TestForceSystemdFlag (25.76s)

                                                
                                    
TestForceSystemdEnv (26.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-450177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-450177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.689675174s)
helpers_test.go:175: Cleaning up "force-systemd-env-450177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-450177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-450177: (2.539406327s)
--- PASS: TestForceSystemdEnv (26.23s)

                                                
                                    
TestErrorSpam/setup (19.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-830102 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-830102 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-830102 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-830102 --driver=docker  --container-runtime=crio: (19.614565244s)
--- PASS: TestErrorSpam/setup (19.61s)

                                                
                                    
TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
TestErrorSpam/pause (5.94s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause: exit status 80 (2.250598243s)

                                                
                                                
-- stdout --
	* Pausing node nospam-830102 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:14:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause: exit status 80 (2.059626068s)

                                                
                                                
-- stdout --
	* Pausing node nospam-830102 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:14:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause: exit status 80 (1.629863128s)

                                                
                                                
-- stdout --
	* Pausing node nospam-830102 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:14:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.94s)

                                                
                                    
TestErrorSpam/unpause (6.04s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause: exit status 80 (1.983518144s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-830102 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:14:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause: exit status 80 (1.839612548s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-830102 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:14:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause: exit status 80 (2.215973111s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-830102 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-15T09:14:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.04s)
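Each of the pause and unpause invocations above exits with status 80 for the same underlying reason: minikube shells into the node and runs `sudo runc list -f json`, and runc reports that /run/runc does not exist. A minimal sketch of how one might reproduce that check by hand, assuming the nospam-830102 profile from the log is still running (the commands below are illustrative, not part of the test suite):

	# check whether runc's default state directory is present inside the node
	minikube -p nospam-830102 ssh -- "ls -ld /run/runc"
	# re-run the exact listing that the pause/unpause path performs
	minikube -p nospam-830102 ssh -- "sudo runc list -f json"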

                                                
                                    
TestErrorSpam/stop (2.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 stop: (2.375326862s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830102 --log_dir /tmp/nospam-830102 stop
--- PASS: TestErrorSpam/stop (2.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21895-355485/.minikube/files/etc/test/nested/copy/359063/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.5s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-838035 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-838035 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.503288755s)
--- PASS: TestFunctional/serial/StartWithProxy (37.50s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.26s)

=== RUN   TestFunctional/serial/SoftStart
I1115 09:15:25.223159  359063 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-838035 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-838035 --alsologtostderr -v=8: (6.258737241s)
functional_test.go:678: soft start took 6.259504765s for "functional-838035" cluster.
I1115 09:15:31.482312  359063 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.26s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-838035 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 cache add registry.k8s.io/pause:3.3: (1.030139388s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.90s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-838035 /tmp/TestFunctionalserialCacheCmdcacheadd_local3018394737/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cache add minikube-local-cache-test:functional-838035
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 cache add minikube-local-cache-test:functional-838035: (1.49541913s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cache delete minikube-local-cache-test:functional-838035
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-838035
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.365532ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 kubectl -- --context functional-838035 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-838035 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.47s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-838035 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1115 09:15:46.558165  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:46.564711  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:46.576157  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:46.597603  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:46.639051  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:46.720559  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:46.882201  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:47.203930  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:47.846095  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:49.127732  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:51.690641  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:15:56.812194  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:16:07.054120  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-838035 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.471939603s)
functional_test.go:776: restart took 42.472116452s for "functional-838035" cluster.
I1115 09:16:21.217941  359063 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.47s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-838035 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 logs: (1.251320804s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 logs --file /tmp/TestFunctionalserialLogsFileCmd2842266254/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 logs --file /tmp/TestFunctionalserialLogsFileCmd2842266254/001/logs.txt: (1.271464529s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
TestFunctional/serial/InvalidService (4.01s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-838035 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-838035
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-838035: exit status 115 (355.727345ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30272 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-838035 delete -f testdata/invalidsvc.yaml
E1115 09:16:27.535739  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/InvalidService (4.01s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 config get cpus: exit status 14 (80.594361ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 config get cpus: exit status 14 (75.355645ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
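For reference, the unset/get/set round-trip that ConfigCmd drives can be reproduced outside the harness. The sketch below is not part of the test suite; it only shells out to the minikube binary with Go's standard library, and the profile name is simply reused from the log above. Exit status 14 on `config get` of an unset key is the expected outcome the test checks for.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runConfig invokes `minikube -p <profile> config <args...>` and returns the
	// combined output together with any exit error, mirroring the sequence
	// exercised by TestFunctional/parallel/ConfigCmd.
	func runConfig(profile string, args ...string) (string, error) {
		cmd := exec.Command("minikube", append([]string{"-p", profile, "config"}, args...)...)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}
	
	func main() {
		profile := "functional-838035" // profile name taken from the log above
		steps := [][]string{
			{"unset", "cpus"},
			{"get", "cpus"}, // expected: exit status 14, "specified key could not be found in config"
			{"set", "cpus", "2"},
			{"get", "cpus"},
			{"unset", "cpus"},
			{"get", "cpus"}, // expected to fail again once the key is removed
		}
		for _, step := range steps {
			out, err := runConfig(profile, step...)
			fmt.Printf("config %v -> %q (err: %v)\n", step, out, err)
		}
	}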

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-838035 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-838035 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 396953: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.88s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-838035 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-838035 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (172.240881ms)

                                                
                                                
-- stdout --
	* [functional-838035] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:16:55.802351  396007 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:16:55.802619  396007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:55.802631  396007 out.go:374] Setting ErrFile to fd 2...
	I1115 09:16:55.802636  396007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:55.802821  396007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:16:55.803256  396007 out.go:368] Setting JSON to false
	I1115 09:16:55.804350  396007 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3557,"bootTime":1763194659,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:16:55.804461  396007 start.go:143] virtualization: kvm guest
	I1115 09:16:55.806442  396007 out.go:179] * [functional-838035] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:16:55.807663  396007 notify.go:221] Checking for updates...
	I1115 09:16:55.807685  396007 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:16:55.808942  396007 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:16:55.810379  396007 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:16:55.811540  396007 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:16:55.812514  396007 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:16:55.813566  396007 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:16:55.814857  396007 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:16:55.815364  396007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:16:55.842076  396007 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:16:55.842197  396007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:16:55.905607  396007 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:16:55.894798191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:16:55.905760  396007 docker.go:319] overlay module found
	I1115 09:16:55.908110  396007 out.go:179] * Using the docker driver based on existing profile
	I1115 09:16:55.909218  396007 start.go:309] selected driver: docker
	I1115 09:16:55.909231  396007 start.go:930] validating driver "docker" against &{Name:functional-838035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-838035 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:16:55.909316  396007 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:16:55.910856  396007 out.go:203] 
	W1115 09:16:55.911811  396007 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:16:55.912928  396007 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-838035 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-838035 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-838035 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (179.289159ms)

                                                
                                                
-- stdout --
	* [functional-838035] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:16:56.213164  396341 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:16:56.213309  396341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:56.213318  396341 out.go:374] Setting ErrFile to fd 2...
	I1115 09:16:56.213322  396341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:16:56.213614  396341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:16:56.214156  396341 out.go:368] Setting JSON to false
	I1115 09:16:56.215139  396341 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3557,"bootTime":1763194659,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:16:56.215242  396341 start.go:143] virtualization: kvm guest
	I1115 09:16:56.217204  396341 out.go:179] * [functional-838035] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1115 09:16:56.218881  396341 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:16:56.218911  396341 notify.go:221] Checking for updates...
	I1115 09:16:56.221073  396341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:16:56.222633  396341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:16:56.223752  396341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:16:56.224880  396341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:16:56.225935  396341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:16:56.227643  396341 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:16:56.228470  396341 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:16:56.257313  396341 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:16:56.257422  396341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:16:56.317342  396341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-15 09:16:56.308241226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:16:56.317476  396341 docker.go:319] overlay module found
	I1115 09:16:56.321449  396341 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1115 09:16:56.322469  396341 start.go:309] selected driver: docker
	I1115 09:16:56.322486  396341 start.go:930] validating driver "docker" against &{Name:functional-838035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-838035 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:16:56.322581  396341 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:16:56.323979  396341 out.go:203] 
	W1115 09:16:56.324980  396341 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1115 09:16:56.325929  396341 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [03716628-d749-4ca3-826b-8cf87fcef989] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003511099s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-838035 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-838035 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-838035 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-838035 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:16:34.166319  359063 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d28f8e1a-3497-4189-9091-f01fe3f520ce] Pending
helpers_test.go:352: "sp-pod" [d28f8e1a-3497-4189-9091-f01fe3f520ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d28f8e1a-3497-4189-9091-f01fe3f520ce] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003683881s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-838035 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-838035 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-838035 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:16:44.985054  359063 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1c8e6571-65b5-48ec-9216-675c5e7ea46f] Pending
helpers_test.go:352: "sp-pod" [1c8e6571-65b5-48ec-9216-675c5e7ea46f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1c8e6571-65b5-48ec-9216-675c5e7ea46f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003412431s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-838035 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.26s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh -n functional-838035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cp functional-838035:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1156867735/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh -n functional-838035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh -n functional-838035 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

                                                
                                    
TestFunctional/parallel/MySQL (18.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-838035 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-6dvxh" [2a1b3d2d-1829-4b50-9c7c-3ffe5561c844] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/11/15 09:17:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-6dvxh" [2a1b3d2d-1829-4b50-9c7c-3ffe5561c844] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003936409s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-838035 exec mysql-5bb876957f-6dvxh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-838035 exec mysql-5bb876957f-6dvxh -- mysql -ppassword -e "show databases;": exit status 1 (97.315463ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:17:15.612258  359063 retry.go:31] will retry after 1.30187929s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-838035 exec mysql-5bb876957f-6dvxh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-838035 exec mysql-5bb876957f-6dvxh -- mysql -ppassword -e "show databases;": exit status 1 (91.087645ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:17:17.006136  359063 retry.go:31] will retry after 1.599440324s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-838035 exec mysql-5bb876957f-6dvxh -- mysql -ppassword -e "show databases;"
E1115 09:18:30.419757  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:20:46.554320  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:21:14.261698  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:25:46.554221  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (18.39s)
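The access-denied and socket errors above are the usual symptoms of the mysql container still initializing; the harness simply retries (retry.go:31) until `show databases;` succeeds. As a rough standalone illustration of that retry shape, assuming a local kubectl and reusing the context and pod name from the log, a sketch could look like this (delays and attempt count are illustrative, not the harness's values):
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// queryMySQL runs `show databases;` inside the mysql pod via kubectl exec,
	// retrying with a growing delay while the server is still starting up
	// (access denied / socket errors), similar to the retries in the log above.
	func queryMySQL(context, pod string) error {
		delay := time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			cmd := exec.Command("kubectl", "--context", context, "exec", pod, "--",
				"mysql", "-ppassword", "-e", "show databases;")
			out, err := cmd.CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return nil
			}
			fmt.Printf("attempt %d failed (%v), retrying after %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // simple doubling backoff; the harness uses randomized delays
		}
		return fmt.Errorf("mysql did not become ready")
	}
	
	func main() {
		// Context and pod name are taken from the log above; both are illustrative.
		if err := queryMySQL("functional-838035", "mysql-5bb876957f-6dvxh"); err != nil {
			fmt.Println(err)
		}
	}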

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/359063/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo cat /etc/test/nested/copy/359063/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/359063.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo cat /etc/ssl/certs/359063.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/359063.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo cat /usr/share/ca-certificates/359063.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3590632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo cat /etc/ssl/certs/3590632.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3590632.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo cat /usr/share/ca-certificates/3590632.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-838035 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh "sudo systemctl is-active docker": exit status 1 (272.224196ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh "sudo systemctl is-active containerd": exit status 1 (274.718265ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
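On a crio cluster this test expects `systemctl is-active docker` and `systemctl is-active containerd` to print `inactive` and exit non-zero on the node (ssh status 3 above), which the minikube command surfaces as exit status 1. A minimal standalone check of the same condition, again assuming the minikube binary and the profile name from the log, might be:
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)
	
	// unitState asks the node, via `minikube ssh`, whether a systemd unit is
	// active. systemctl prints the state on stdout and exits non-zero for an
	// inactive unit, so a non-zero exit alone is not treated as a failure here.
	func unitState(profile, unit string) (string, error) {
		cmd := exec.Command("minikube", "-p", profile, "ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if err != nil && !errors.As(err, &exitErr) {
			return "", err // could not run the command at all
		}
		return state, nil
	}
	
	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			state, err := unitState("functional-838035", unit)
			fmt.Printf("%s: %s (err: %v)\n", unit, state, err)
		}
	}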

                                                
                                    
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-838035 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-838035 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-838035 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-838035 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 391886: os: process already finished
helpers_test.go:525: unable to kill pid 391685: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-838035 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-838035 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [fe22cd49-6981-4e77-a431-0c2ab0e6b4e5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [fe22cd49-6981-4e77-a431-0c2ab0e6b4e5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003657837s
I1115 09:16:38.752521  359063 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-838035 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.169.245 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-838035 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-838035 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-838035 image ls --format short --alsologtostderr:
I1115 09:17:03.813052  398512 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:03.813308  398512 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:03.813319  398512 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:03.813326  398512 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:03.813557  398512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
I1115 09:17:03.814163  398512 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:03.814284  398512 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:03.814729  398512 cli_runner.go:164] Run: docker container inspect functional-838035 --format={{.State.Status}}
I1115 09:17:03.833710  398512 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:03.833777  398512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-838035
I1115 09:17:03.851606  398512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/functional-838035/id_rsa Username:docker}
I1115 09:17:03.944203  398512 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-838035 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-838035  │ 2962a29a7e9f7 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-838035 image ls --format table --alsologtostderr:
I1115 09:17:10.896458  399571 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:10.896766  399571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:10.896778  399571 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:10.896782  399571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:10.896999  399571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
I1115 09:17:10.897576  399571 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:10.897693  399571 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:10.898482  399571 cli_runner.go:164] Run: docker container inspect functional-838035 --format={{.State.Status}}
I1115 09:17:10.917140  399571 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:10.917220  399571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-838035
I1115 09:17:10.936017  399571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/functional-838035/id_rsa Username:docker}
I1115 09:17:11.029635  399571 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-838035 image ls --format json --alsologtostderr:
[{"id":"2962a29a7e9f7e1ca640cdfc0af56c06892a91ccd68c77b73236be1748502624","repoDigests":["localhost/my-image@sha256:45d61adff980c554d04f5f202c49d25fcc92a19a2ad54e8c6310157986a21bd3"],"repoTags":["localhost/my-image:functional-838035"],"size":"1468744"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"8c858e96d183af70e03c6b4aa6895f651e9e4a7d1aad09bf16d47cdecb46fa02","repoDigests":["docker.io/library/b506993ea1524b470a6d141fd94c6669bacdf2a30b08e59ce8e121d4b87b7918-tmp@sha256:8eb7282799266eb7c3a068ed32f066b73f6327410fd3f9c79f2035bdfa7dc24b"],"repoTags":[],"size":"1466132"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDige
sts":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c5
3d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry
.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c303
1ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.i
o/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bf
a97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-838035 image ls --format json --alsologtostderr:
I1115 09:17:10.669512  399513 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:10.669678  399513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:10.669693  399513 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:10.669699  399513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:10.670332  399513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
I1115 09:17:10.671061  399513 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:10.671181  399513 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:10.671650  399513 cli_runner.go:164] Run: docker container inspect functional-838035 --format={{.State.Status}}
I1115 09:17:10.690997  399513 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:10.691052  399513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-838035
I1115 09:17:10.711215  399513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/functional-838035/id_rsa Username:docker}
I1115 09:17:10.804262  399513 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-838035 image ls --format yaml --alsologtostderr:
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-838035 image ls --format yaml --alsologtostderr:
I1115 09:17:04.037895  398567 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:04.038028  398567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:04.038037  398567 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:04.038041  398567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:04.038235  398567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
I1115 09:17:04.038800  398567 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:04.038893  398567 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:04.039312  398567 cli_runner.go:164] Run: docker container inspect functional-838035 --format={{.State.Status}}
I1115 09:17:04.057737  398567 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:04.057795  398567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-838035
I1115 09:17:04.075164  398567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/functional-838035/id_rsa Username:docker}
I1115 09:17:04.168159  398567 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
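Note: the three ImageList variants above only change the output encoding. As the --alsologtostderr traces show, each run resolves the profile's SSH port from docker container inspect and then executes `sudo crictl images --output json` on the node. A minimal reproduction against this profile would be:

  out/minikube-linux-amd64 -p functional-838035 image ls --format table
  out/minikube-linux-amd64 -p functional-838035 image ls --format json
  out/minikube-linux-amd64 -p functional-838035 image ls --format yaml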

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh pgrep buildkitd: exit status 1 (277.863687ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image build -t localhost/my-image:functional-838035 testdata/build --alsologtostderr
E1115 09:17:08.497441  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 image build -t localhost/my-image:functional-838035 testdata/build --alsologtostderr: (5.768041213s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-838035 image build -t localhost/my-image:functional-838035 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8c858e96d18
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-838035
--> 2962a29a7e9
Successfully tagged localhost/my-image:functional-838035
2962a29a7e9f7e1ca640cdfc0af56c06892a91ccd68c77b73236be1748502624
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-838035 image build -t localhost/my-image:functional-838035 testdata/build --alsologtostderr:
I1115 09:17:04.548252  398743 out.go:360] Setting OutFile to fd 1 ...
I1115 09:17:04.548545  398743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:04.548557  398743 out.go:374] Setting ErrFile to fd 2...
I1115 09:17:04.548561  398743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:17:04.548753  398743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
I1115 09:17:04.549370  398743 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:04.550005  398743 config.go:182] Loaded profile config "functional-838035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:17:04.550388  398743 cli_runner.go:164] Run: docker container inspect functional-838035 --format={{.State.Status}}
I1115 09:17:04.571630  398743 ssh_runner.go:195] Run: systemctl --version
I1115 09:17:04.571700  398743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-838035
I1115 09:17:04.594852  398743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/functional-838035/id_rsa Username:docker}
I1115 09:17:04.702621  398743 build_images.go:162] Building image from path: /tmp/build.273552382.tar
I1115 09:17:04.702696  398743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1115 09:17:04.716143  398743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.273552382.tar
I1115 09:17:04.721937  398743 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.273552382.tar: stat -c "%s %y" /var/lib/minikube/build/build.273552382.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.273552382.tar': No such file or directory
I1115 09:17:04.721966  398743 ssh_runner.go:362] scp /tmp/build.273552382.tar --> /var/lib/minikube/build/build.273552382.tar (3072 bytes)
I1115 09:17:04.750551  398743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.273552382
I1115 09:17:04.762780  398743 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.273552382 -xf /var/lib/minikube/build/build.273552382.tar
I1115 09:17:04.774841  398743 crio.go:315] Building image: /var/lib/minikube/build/build.273552382
I1115 09:17:04.774929  398743 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-838035 /var/lib/minikube/build/build.273552382 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1115 09:17:10.225338  398743 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-838035 /var/lib/minikube/build/build.273552382 --cgroup-manager=cgroupfs: (5.450376509s)
I1115 09:17:10.225432  398743 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.273552382
I1115 09:17:10.233682  398743 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.273552382.tar
I1115 09:17:10.241883  398743 build_images.go:218] Built localhost/my-image:functional-838035 from /tmp/build.273552382.tar
I1115 09:17:10.241919  398743 build_images.go:134] succeeded building to: functional-838035
I1115 09:17:10.241923  398743 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.41s)
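As the stderr above shows, `image build` on the crio runtime packs the local build context into a tar, copies it onto the node, unpacks it under /var/lib/minikube/build, and builds it there with podman. A rough manual equivalent of the node-side steps logged in this run (the build.273552382 names are this run's temporary paths) would be:

  # inside the functional-838035 node (via minikube ssh)
  sudo mkdir -p /var/lib/minikube/build/build.273552382
  sudo tar -C /var/lib/minikube/build/build.273552382 -xf /var/lib/minikube/build/build.273552382.tar
  sudo podman build -t localhost/my-image:functional-838035 /var/lib/minikube/build/build.273552382 --cgroup-manager=cgroupfs
  sudo rm -rf /var/lib/minikube/build/build.273552382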

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.702693911s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-838035
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image rm kicbase/echo-server:functional-838035 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "327.602881ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.165407ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "336.941427ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.513702ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
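For reference, the timing gap above comes from the --light (-l) variants, which skip validating cluster status; the commands timed in this run were:

  out/minikube-linux-amd64 profile list -o json          # ~337ms in this run
  out/minikube-linux-amd64 profile list -o json --light  # ~62ms in this run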

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdany-port1230450293/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763198208624174427" to /tmp/TestFunctionalparallelMountCmdany-port1230450293/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763198208624174427" to /tmp/TestFunctionalparallelMountCmdany-port1230450293/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763198208624174427" to /tmp/TestFunctionalparallelMountCmdany-port1230450293/001/test-1763198208624174427
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (291.562266ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:16:48.916095  359063 retry.go:31] will retry after 612.420675ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 15 09:16 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 15 09:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 15 09:16 test-1763198208624174427
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh cat /mount-9p/test-1763198208624174427
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-838035 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1d0d7cdf-5291-4c56-b915-458fca73a35d] Pending
helpers_test.go:352: "busybox-mount" [1d0d7cdf-5291-4c56-b915-458fca73a35d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1d0d7cdf-5291-4c56-b915-458fca73a35d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1d0d7cdf-5291-4c56-b915-458fca73a35d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003057377s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-838035 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdany-port1230450293/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.90s)
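The sequence above exercises the 9p mount end to end: a mount daemon on the host, findmnt/ls/cat checks over ssh, then a busybox pod that reads and removes files on the mount. The host-side checks can be reproduced by hand roughly as follows, with /tmp/scratch standing in for the test's temporary directory:

  out/minikube-linux-amd64 mount -p functional-838035 /tmp/scratch:/mount-9p &   # keep the mount daemon running
  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-838035 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-838035 ssh "sudo umount -f /mount-9p"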

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdspecific-port319790668/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.962179ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:16:56.824918  359063 retry.go:31] will retry after 395.428697ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdspecific-port319790668/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh "sudo umount -f /mount-9p": exit status 1 (274.617838ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-838035 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdspecific-port319790668/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T" /mount1: exit status 1 (346.85209ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:16:58.652159  359063 retry.go:31] will retry after 273.881595ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-838035 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-838035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4086067347/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 service list: (1.706887421s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-838035 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-838035 service list -o json: (1.719823181s)
functional_test.go:1504: Took "1.719925964s" to run "out/minikube-linux-amd64 -p functional-838035 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)
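Both service listings above are the same query with different encodings; the JSON form is the one the test times:

  out/minikube-linux-amd64 -p functional-838035 service list
  out/minikube-linux-amd64 -p functional-838035 service list -o json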

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-838035
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-838035
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-838035
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (147.6s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m26.878174715s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (147.60s)
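The cluster under test here is an HA profile with three control-plane nodes; stripped of the test's logging flags, the invocation above amounts to:

  out/minikube-linux-amd64 -p ha-577290 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p ha-577290 status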

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.42s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 kubectl -- rollout status deployment/busybox: (3.467688608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-4h67r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-n4kml -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-wzz75 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-4h67r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-n4kml -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-wzz75 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-4h67r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-n4kml -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-wzz75 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.42s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.06s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-4h67r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-4h67r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-n4kml -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-n4kml -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-wzz75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-wzz75 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.06s)
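Each pod check above first resolves host.minikube.internal inside the pod and then pings the address the lookup returned (192.168.49.1, the cluster network's host gateway in this run). One iteration, using a pod name from this run, looks like:

  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-4h67r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-amd64 -p ha-577290 kubectl -- exec busybox-7b57f96db7-4h67r -- sh -c "ping -c 1 192.168.49.1"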

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.49s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 node add --alsologtostderr -v 5: (53.60323722s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.49s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-577290 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.52s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp testdata/cp-test.txt ha-577290:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile512031102/001/cp-test_ha-577290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290:/home/docker/cp-test.txt ha-577290-m02:/home/docker/cp-test_ha-577290_ha-577290-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test_ha-577290_ha-577290-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290:/home/docker/cp-test.txt ha-577290-m03:/home/docker/cp-test_ha-577290_ha-577290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test_ha-577290_ha-577290-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290:/home/docker/cp-test.txt ha-577290-m04:/home/docker/cp-test_ha-577290_ha-577290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test_ha-577290_ha-577290-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp testdata/cp-test.txt ha-577290-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile512031102/001/cp-test_ha-577290-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m02:/home/docker/cp-test.txt ha-577290:/home/docker/cp-test_ha-577290-m02_ha-577290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test_ha-577290-m02_ha-577290.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m02:/home/docker/cp-test.txt ha-577290-m03:/home/docker/cp-test_ha-577290-m02_ha-577290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test_ha-577290-m02_ha-577290-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m02:/home/docker/cp-test.txt ha-577290-m04:/home/docker/cp-test_ha-577290-m02_ha-577290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test_ha-577290-m02_ha-577290-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp testdata/cp-test.txt ha-577290-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile512031102/001/cp-test_ha-577290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m03:/home/docker/cp-test.txt ha-577290:/home/docker/cp-test_ha-577290-m03_ha-577290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test_ha-577290-m03_ha-577290.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m03:/home/docker/cp-test.txt ha-577290-m02:/home/docker/cp-test_ha-577290-m03_ha-577290-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test_ha-577290-m03_ha-577290-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m03:/home/docker/cp-test.txt ha-577290-m04:/home/docker/cp-test_ha-577290-m03_ha-577290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test_ha-577290-m03_ha-577290-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp testdata/cp-test.txt ha-577290-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile512031102/001/cp-test_ha-577290-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290:/home/docker/cp-test_ha-577290-m04_ha-577290.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290 "sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290-m02:/home/docker/cp-test_ha-577290-m04_ha-577290-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 cp ha-577290-m04:/home/docker/cp-test.txt ha-577290-m03:/home/docker/cp-test_ha-577290-m04_ha-577290-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m03 "sudo cat /home/docker/cp-test_ha-577290-m04_ha-577290-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.52s)
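In short, each copy check above is a two-step pattern: push a file into a node with "minikube cp", then read it back over "minikube ssh" on the destination node. A minimal by-hand equivalent, using the same profile (ha-577290) and node names as the log:

    out/minikube-linux-amd64 -p ha-577290 cp testdata/cp-test.txt ha-577290-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-577290 ssh -n ha-577290-m02 "sudo cat /home/docker/cp-test.txt"

The test repeats this for every source/destination pair, including copies back to a temp directory on the host.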

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (13.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 node stop m02 --alsologtostderr -v 5: (12.667173829s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5: exit status 7 (706.052711ms)

                                                
                                                
-- stdout --
	ha-577290
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-577290-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-577290-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-577290-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:30:38.858575  424257 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:30:38.858826  424257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:30:38.858834  424257 out.go:374] Setting ErrFile to fd 2...
	I1115 09:30:38.858838  424257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:30:38.859030  424257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:30:38.859209  424257 out.go:368] Setting JSON to false
	I1115 09:30:38.859245  424257 mustload.go:66] Loading cluster: ha-577290
	I1115 09:30:38.859337  424257 notify.go:221] Checking for updates...
	I1115 09:30:38.859666  424257 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:30:38.859682  424257 status.go:174] checking status of ha-577290 ...
	I1115 09:30:38.860133  424257 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:30:38.882344  424257 status.go:371] ha-577290 host status = "Running" (err=<nil>)
	I1115 09:30:38.882414  424257 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:30:38.882785  424257 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290
	I1115 09:30:38.901214  424257 host.go:66] Checking if "ha-577290" exists ...
	I1115 09:30:38.901509  424257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:30:38.901553  424257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290
	I1115 09:30:38.920242  424257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290/id_rsa Username:docker}
	I1115 09:30:39.013230  424257 ssh_runner.go:195] Run: systemctl --version
	I1115 09:30:39.020148  424257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:30:39.032557  424257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:30:39.091376  424257 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-15 09:30:39.079383606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:30:39.091956  424257 kubeconfig.go:125] found "ha-577290" server: "https://192.168.49.254:8443"
	I1115 09:30:39.091988  424257 api_server.go:166] Checking apiserver status ...
	I1115 09:30:39.092037  424257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:30:39.103678  424257 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup
	W1115 09:30:39.112283  424257 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:30:39.112344  424257 ssh_runner.go:195] Run: ls
	I1115 09:30:39.116363  424257 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 09:30:39.122082  424257 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 09:30:39.122107  424257 status.go:463] ha-577290 apiserver status = Running (err=<nil>)
	I1115 09:30:39.122118  424257 status.go:176] ha-577290 status: &{Name:ha-577290 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:30:39.122146  424257 status.go:174] checking status of ha-577290-m02 ...
	I1115 09:30:39.122415  424257 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:30:39.141538  424257 status.go:371] ha-577290-m02 host status = "Stopped" (err=<nil>)
	I1115 09:30:39.141561  424257 status.go:384] host is not running, skipping remaining checks
	I1115 09:30:39.141568  424257 status.go:176] ha-577290-m02 status: &{Name:ha-577290-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:30:39.141591  424257 status.go:174] checking status of ha-577290-m03 ...
	I1115 09:30:39.141888  424257 cli_runner.go:164] Run: docker container inspect ha-577290-m03 --format={{.State.Status}}
	I1115 09:30:39.161769  424257 status.go:371] ha-577290-m03 host status = "Running" (err=<nil>)
	I1115 09:30:39.161795  424257 host.go:66] Checking if "ha-577290-m03" exists ...
	I1115 09:30:39.162070  424257 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m03
	I1115 09:30:39.180850  424257 host.go:66] Checking if "ha-577290-m03" exists ...
	I1115 09:30:39.181120  424257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:30:39.181159  424257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m03
	I1115 09:30:39.201003  424257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m03/id_rsa Username:docker}
	I1115 09:30:39.294325  424257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:30:39.308459  424257 kubeconfig.go:125] found "ha-577290" server: "https://192.168.49.254:8443"
	I1115 09:30:39.308488  424257 api_server.go:166] Checking apiserver status ...
	I1115 09:30:39.308524  424257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:30:39.319682  424257 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W1115 09:30:39.328248  424257 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:30:39.328307  424257 ssh_runner.go:195] Run: ls
	I1115 09:30:39.332104  424257 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 09:30:39.336271  424257 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 09:30:39.336294  424257 status.go:463] ha-577290-m03 apiserver status = Running (err=<nil>)
	I1115 09:30:39.336303  424257 status.go:176] ha-577290-m03 status: &{Name:ha-577290-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:30:39.336324  424257 status.go:174] checking status of ha-577290-m04 ...
	I1115 09:30:39.336599  424257 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:30:39.355018  424257 status.go:371] ha-577290-m04 host status = "Running" (err=<nil>)
	I1115 09:30:39.355042  424257 host.go:66] Checking if "ha-577290-m04" exists ...
	I1115 09:30:39.355289  424257 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577290-m04
	I1115 09:30:39.373570  424257 host.go:66] Checking if "ha-577290-m04" exists ...
	I1115 09:30:39.373824  424257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:30:39.373864  424257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577290-m04
	I1115 09:30:39.393935  424257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/ha-577290-m04/id_rsa Username:docker}
	I1115 09:30:39.487684  424257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:30:39.500511  424257 status.go:176] ha-577290-m04 status: &{Name:ha-577290-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.37s)
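The check above boils down to stopping one control-plane node and confirming that "minikube status" reports the cluster as degraded through its exit code. A minimal sketch with the same commands (profile ha-577290 and node m02 taken from the log):

    out/minikube-linux-amd64 -p ha-577290 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
    echo $?   # 7 here: m02 shows Stopped while ha-577290, m03 and m04 stay Running

status returns non-zero whenever any node is not fully up, which is why the test treats the exit status 7 above as expected rather than as a failure.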

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (9.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 node start m02 --alsologtostderr -v 5
E1115 09:30:46.554165  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 node start m02 --alsologtostderr -v 5: (8.164483508s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 node delete m03 --alsologtostderr -v 5: (9.765023668s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.61s)
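After deleting m03, the test verifies that the remaining nodes are all Ready using a kubectl go-template over the node conditions (ha_test.go:521 above). A shell-friendly sketch of the same check; the outer single quotes are added here for easier pasting, everything else is as in the log:

    out/minikube-linux-amd64 -p ha-577290 node delete m03 --alsologtostderr -v 5
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

Each remaining node should print True.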

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (43.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 stop --alsologtostderr -v 5: (42.955271299s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5: exit status 7 (125.737555ms)

                                                
                                                
-- stdout --
	ha-577290
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-577290-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-577290-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:39:00.151481  440619 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:39:00.151758  440619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:39:00.151767  440619 out.go:374] Setting ErrFile to fd 2...
	I1115 09:39:00.151772  440619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:39:00.151966  440619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:39:00.152132  440619 out.go:368] Setting JSON to false
	I1115 09:39:00.152168  440619 mustload.go:66] Loading cluster: ha-577290
	I1115 09:39:00.152295  440619 notify.go:221] Checking for updates...
	I1115 09:39:00.152607  440619 config.go:182] Loaded profile config "ha-577290": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:39:00.152624  440619 status.go:174] checking status of ha-577290 ...
	I1115 09:39:00.153085  440619 cli_runner.go:164] Run: docker container inspect ha-577290 --format={{.State.Status}}
	I1115 09:39:00.174949  440619 status.go:371] ha-577290 host status = "Stopped" (err=<nil>)
	I1115 09:39:00.174998  440619 status.go:384] host is not running, skipping remaining checks
	I1115 09:39:00.175007  440619 status.go:176] ha-577290 status: &{Name:ha-577290 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:39:00.175038  440619 status.go:174] checking status of ha-577290-m02 ...
	I1115 09:39:00.175303  440619 cli_runner.go:164] Run: docker container inspect ha-577290-m02 --format={{.State.Status}}
	I1115 09:39:00.194073  440619 status.go:371] ha-577290-m02 host status = "Stopped" (err=<nil>)
	I1115 09:39:00.194096  440619 status.go:384] host is not running, skipping remaining checks
	I1115 09:39:00.194104  440619 status.go:176] ha-577290-m02 status: &{Name:ha-577290-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:39:00.194145  440619 status.go:174] checking status of ha-577290-m04 ...
	I1115 09:39:00.194508  440619 cli_runner.go:164] Run: docker container inspect ha-577290-m04 --format={{.State.Status}}
	I1115 09:39:00.212996  440619 status.go:371] ha-577290-m04 host status = "Stopped" (err=<nil>)
	I1115 09:39:00.213018  440619 status.go:384] host is not running, skipping remaining checks
	I1115 09:39:00.213024  440619 status.go:176] ha-577290-m04 status: &{Name:ha-577290-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (72.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m12.151804497s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (72.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (35.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 node add --control-plane --alsologtostderr -v 5
E1115 09:40:46.554832  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-577290 node add --control-plane --alsologtostderr -v 5: (34.208558489s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.09s)
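This step adds a control-plane node back after the earlier delete and re-checks the cluster. The same two commands, by hand:

    out/minikube-linux-amd64 -p ha-577290 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-577290 status --alsologtostderr -v 5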

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
x
+
TestJSONOutput/start/Command (42.65s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-921712 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1115 09:41:27.820301  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-921712 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (42.644699697s)
--- PASS: TestJSONOutput/start/Command (42.65s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.22s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-921712 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-921712 --output=json --user=testUser: (6.219110988s)
--- PASS: TestJSONOutput/stop/Command (6.22s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-437914 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-437914 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.088823ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ed81f209-c1d8-4be0-a5e6-03f8351d43d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-437914] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"06d7b832-7906-4f75-8d5a-f74201a6ce39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21895"}}
	{"specversion":"1.0","id":"4b81a095-3588-49f2-949f-a497bd53f942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"174b27a2-8df6-4f9a-8e81-7e52a3430696","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig"}}
	{"specversion":"1.0","id":"b89d6a0f-c9c7-41d3-8286-0e02660928c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube"}}
	{"specversion":"1.0","id":"1e8ddb4e-a097-4b94-b460-99429305e216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2c6cb881-76d2-4a22-8e95-93e6b41e3d70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d2d59f3a-e929-46ed-aff5-f89f198f1aa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-437914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-437914
--- PASS: TestErrorJSONOutput (0.24s)
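With --output=json, every progress step and error is emitted as a one-line CloudEvent (the io.k8s.sigs.minikube.* types shown above). A small sketch of filtering the error event out of that stream; jq is an assumption here and is not part of the test:

    out/minikube-linux-amd64 start -p json-output-error-437914 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

For the run above this would print "DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64", and the start command itself exits 56.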

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-816272 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-816272 --network=: (37.487979327s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-816272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-816272
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-816272: (2.136974757s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.64s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (24.21s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-249408 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-249408 --network=bridge: (22.173450646s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-249408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-249408
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-249408: (2.017213626s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.21s)

                                                
                                    
x
+
TestKicExistingNetwork (23.94s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1115 09:43:01.993150  359063 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1115 09:43:02.010521  359063 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1115 09:43:02.010605  359063 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1115 09:43:02.010643  359063 cli_runner.go:164] Run: docker network inspect existing-network
W1115 09:43:02.027837  359063 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1115 09:43:02.027880  359063 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1115 09:43:02.027896  359063 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1115 09:43:02.028140  359063 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1115 09:43:02.045414  359063 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a8fb985664d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:ab:70:dd:9f:65} reservation:<nil>}
I1115 09:43:02.045899  359063 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002121110}
I1115 09:43:02.045929  359063 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1115 09:43:02.045981  359063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1115 09:43:02.091558  359063 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-584951 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-584951 --network=existing-network: (21.763087561s)
helpers_test.go:175: Cleaning up "existing-network-584951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-584951
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-584951: (2.044384463s)
I1115 09:43:25.918094  359063 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.94s)
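The point of this test is that minikube can attach to a Docker network that already exists instead of creating its own. A trimmed by-hand version (the -o and --label options from the full network-create command in the log are omitted here for brevity):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-584951 --network=existing-network
    docker network ls --format {{.Name}}   # existing-network is reused, not recreated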

                                                
                                    
x
+
TestKicCustomSubnet (24.54s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-248433 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-248433 --subnet=192.168.60.0/24: (22.387277237s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-248433 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-248433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-248433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-248433: (2.132643297s)
--- PASS: TestKicCustomSubnet (24.54s)

                                                
                                    
x
+
TestKicStaticIP (28.87s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-351562 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-351562 --static-ip=192.168.200.200: (26.588984086s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-351562 ip
helpers_test.go:175: Cleaning up "static-ip-351562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-351562
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-351562: (2.126454167s)
--- PASS: TestKicStaticIP (28.87s)
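Here the cluster is started with a fixed container IP and the reported address is read back. The same pair of commands, by hand:

    out/minikube-linux-amd64 start -p static-ip-351562 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-351562 ip   # expected to print 192.168.200.200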

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (48.68s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-942164 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-942164 --driver=docker  --container-runtime=crio: (20.599706017s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-945098 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-945098 --driver=docker  --container-runtime=crio: (22.156814845s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-942164
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-945098
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-945098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-945098
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-945098: (2.354721859s)
helpers_test.go:175: Cleaning up "first-942164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-942164
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-942164: (2.326551906s)
--- PASS: TestMinikubeProfile (48.68s)
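The profile test simply switches the active profile back and forth between the two clusters and reads the result in JSON. For reference:

    out/minikube-linux-amd64 profile first-942164     # make first-942164 the active profile
    out/minikube-linux-amd64 profile list -ojson      # machine-readable view of all profiles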

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (4.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-228272 --memory=3072 --mount-string /tmp/TestMountStartserial3712862509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-228272 --memory=3072 --mount-string /tmp/TestMountStartserial3712862509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.883967003s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-228272 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
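The two steps above start a Kubernetes-less node with a host directory mounted at /minikube-host and then list that mount from inside the node. The same commands, by hand (the /tmp/TestMountStartserial... path is a temp dir created by the test harness; any host directory should work in its place):

    out/minikube-linux-amd64 start -p mount-start-1-228272 --memory=3072 \
      --mount-string /tmp/TestMountStartserial3712862509/001:/minikube-host \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-228272 ssh -- ls /minikube-host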

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (4.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-242602 --memory=3072 --mount-string /tmp/TestMountStartserial3712862509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-242602 --memory=3072 --mount-string /tmp/TestMountStartserial3712862509/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.830213222s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.83s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-242602 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-228272 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-228272 --alsologtostderr -v=5: (1.684896305s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-242602 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-242602
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-242602: (1.26130924s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-242602
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-242602: (7.22134609s)
--- PASS: TestMountStart/serial/RestartStopped (8.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-242602 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (92.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114173 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1115 09:45:46.557600  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:46:27.819781  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114173 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.371177059s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.85s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-114173 -- rollout status deployment/busybox: (2.964547939s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-qsbnd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-vbcjk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-qsbnd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-vbcjk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-qsbnd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-vbcjk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.32s)
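The deployment check follows a simple recipe: apply the busybox manifest, wait for the rollout, then run DNS lookups from each pod. By hand (the pod name below is a placeholder; the real names, e.g. busybox-7b57f96db7-qsbnd, come from the get pods call):

    out/minikube-linux-amd64 kubectl -p multinode-114173 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-114173 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p multinode-114173 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local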

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-qsbnd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-qsbnd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-vbcjk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-114173 -- exec busybox-7b57f96db7-vbcjk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (54.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-114173 -v=5 --alsologtostderr
E1115 09:47:50.890845  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-114173 -v=5 --alsologtostderr: (53.591431714s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.23s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-114173 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.83s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp testdata/cp-test.txt multinode-114173:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile382486341/001/cp-test_multinode-114173.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173:/home/docker/cp-test.txt multinode-114173-m02:/home/docker/cp-test_multinode-114173_multinode-114173-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m02 "sudo cat /home/docker/cp-test_multinode-114173_multinode-114173-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173:/home/docker/cp-test.txt multinode-114173-m03:/home/docker/cp-test_multinode-114173_multinode-114173-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m03 "sudo cat /home/docker/cp-test_multinode-114173_multinode-114173-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp testdata/cp-test.txt multinode-114173-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile382486341/001/cp-test_multinode-114173-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173-m02:/home/docker/cp-test.txt multinode-114173:/home/docker/cp-test_multinode-114173-m02_multinode-114173.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173 "sudo cat /home/docker/cp-test_multinode-114173-m02_multinode-114173.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173-m02:/home/docker/cp-test.txt multinode-114173-m03:/home/docker/cp-test_multinode-114173-m02_multinode-114173-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m03 "sudo cat /home/docker/cp-test_multinode-114173-m02_multinode-114173-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp testdata/cp-test.txt multinode-114173-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile382486341/001/cp-test_multinode-114173-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173-m03:/home/docker/cp-test.txt multinode-114173:/home/docker/cp-test_multinode-114173-m03_multinode-114173.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173 "sudo cat /home/docker/cp-test_multinode-114173-m03_multinode-114173.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 cp multinode-114173-m03:/home/docker/cp-test.txt multinode-114173-m02:/home/docker/cp-test_multinode-114173-m03_multinode-114173-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 ssh -n multinode-114173-m02 "sudo cat /home/docker/cp-test_multinode-114173-m03_multinode-114173-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.83s)
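
Each cp/ssh pair above copies a file and immediately reads it back over SSH to confirm the transfer. A minimal sketch of one host-to-node and one node-to-node round trip, using the paths from the log (minikube on PATH stands in for out/minikube-linux-amd64):

    # host -> primary node, then read it back
    minikube -p multinode-114173 cp testdata/cp-test.txt multinode-114173:/home/docker/cp-test.txt
    minikube -p multinode-114173 ssh -n multinode-114173 "sudo cat /home/docker/cp-test.txt"
    # primary node -> second node, then read it back on m02
    minikube -p multinode-114173 cp multinode-114173:/home/docker/cp-test.txt \
        multinode-114173-m02:/home/docker/cp-test_multinode-114173_multinode-114173-m02.txt
    minikube -p multinode-114173 ssh -n multinode-114173-m02 \
        "sudo cat /home/docker/cp-test_multinode-114173_multinode-114173-m02.txt"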

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-114173 node stop m03: (1.258239787s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114173 status: exit status 7 (496.610023ms)

                                                
                                                
-- stdout --
	multinode-114173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-114173-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-114173-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr: exit status 7 (500.572732ms)

                                                
                                                
-- stdout --
	multinode-114173
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-114173-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-114173-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:48:16.437477  502208 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:48:16.437642  502208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:48:16.437653  502208 out.go:374] Setting ErrFile to fd 2...
	I1115 09:48:16.437660  502208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:48:16.437889  502208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:48:16.438091  502208 out.go:368] Setting JSON to false
	I1115 09:48:16.438137  502208 mustload.go:66] Loading cluster: multinode-114173
	I1115 09:48:16.438251  502208 notify.go:221] Checking for updates...
	I1115 09:48:16.438607  502208 config.go:182] Loaded profile config "multinode-114173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:48:16.438960  502208 status.go:174] checking status of multinode-114173 ...
	I1115 09:48:16.440378  502208 cli_runner.go:164] Run: docker container inspect multinode-114173 --format={{.State.Status}}
	I1115 09:48:16.458787  502208 status.go:371] multinode-114173 host status = "Running" (err=<nil>)
	I1115 09:48:16.458814  502208 host.go:66] Checking if "multinode-114173" exists ...
	I1115 09:48:16.459132  502208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-114173
	I1115 09:48:16.477634  502208 host.go:66] Checking if "multinode-114173" exists ...
	I1115 09:48:16.477994  502208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:48:16.478048  502208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-114173
	I1115 09:48:16.497608  502208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/multinode-114173/id_rsa Username:docker}
	I1115 09:48:16.589080  502208 ssh_runner.go:195] Run: systemctl --version
	I1115 09:48:16.595310  502208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:48:16.608160  502208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:48:16.668433  502208 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-15 09:48:16.658259004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:48:16.668983  502208 kubeconfig.go:125] found "multinode-114173" server: "https://192.168.67.2:8443"
	I1115 09:48:16.669020  502208 api_server.go:166] Checking apiserver status ...
	I1115 09:48:16.669069  502208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:48:16.680681  502208 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup
	W1115 09:48:16.689321  502208 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:48:16.689367  502208 ssh_runner.go:195] Run: ls
	I1115 09:48:16.693507  502208 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1115 09:48:16.697715  502208 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1115 09:48:16.697753  502208 status.go:463] multinode-114173 apiserver status = Running (err=<nil>)
	I1115 09:48:16.697765  502208 status.go:176] multinode-114173 status: &{Name:multinode-114173 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:48:16.697783  502208 status.go:174] checking status of multinode-114173-m02 ...
	I1115 09:48:16.698031  502208 cli_runner.go:164] Run: docker container inspect multinode-114173-m02 --format={{.State.Status}}
	I1115 09:48:16.716710  502208 status.go:371] multinode-114173-m02 host status = "Running" (err=<nil>)
	I1115 09:48:16.716737  502208 host.go:66] Checking if "multinode-114173-m02" exists ...
	I1115 09:48:16.717006  502208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-114173-m02
	I1115 09:48:16.736039  502208 host.go:66] Checking if "multinode-114173-m02" exists ...
	I1115 09:48:16.736305  502208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:48:16.736345  502208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-114173-m02
	I1115 09:48:16.754929  502208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33284 SSHKeyPath:/home/jenkins/minikube-integration/21895-355485/.minikube/machines/multinode-114173-m02/id_rsa Username:docker}
	I1115 09:48:16.846068  502208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:48:16.858248  502208 status.go:176] multinode-114173-m02 status: &{Name:multinode-114173-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:48:16.858294  502208 status.go:174] checking status of multinode-114173-m03 ...
	I1115 09:48:16.858594  502208 cli_runner.go:164] Run: docker container inspect multinode-114173-m03 --format={{.State.Status}}
	I1115 09:48:16.876879  502208 status.go:371] multinode-114173-m03 host status = "Stopped" (err=<nil>)
	I1115 09:48:16.876901  502208 status.go:384] host is not running, skipping remaining checks
	I1115 09:48:16.876908  502208 status.go:176] multinode-114173-m03 status: &{Name:multinode-114173-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-114173 node start m03 -v=5 --alsologtostderr: (6.586807484s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.30s)
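
Together with the StopNode block above, this is the single-node stop/start cycle. Sketched by hand under the same assumptions (minikube on PATH stands in for out/minikube-linux-amd64):

    minikube -p multinode-114173 node stop m03      # stop only the third node
    minikube -p multinode-114173 status || true     # exits 7 while any node is stopped
    minikube -p multinode-114173 node start m03     # bring it back
    kubectl get nodes                               # check the node list from the cluster side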

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-114173
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-114173
E1115 09:48:49.626602  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-114173: (31.351921318s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114173 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114173 --wait=true -v=5 --alsologtostderr: (50.725713122s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-114173
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.21s)
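
The point of this test is that a full stop/start cycle preserves the node list. A hand-run equivalent of the commands above:

    minikube node list -p multinode-114173                    # record the nodes before
    minikube stop -p multinode-114173                         # stop the whole cluster
    minikube start -p multinode-114173 --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-114173                    # same nodes expected after restart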

                                                
                                    
TestMultiNode/serial/DeleteNode (5.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-114173 node delete m03: (4.652831292s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.62s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-114173 stop: (28.413995198s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114173 status: exit status 7 (103.459641ms)

                                                
                                                
-- stdout --
	multinode-114173
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-114173-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr: exit status 7 (99.545072ms)

                                                
                                                
-- stdout --
	multinode-114173
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-114173-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:50:20.214006  511970 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:50:20.214311  511970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:20.214322  511970 out.go:374] Setting ErrFile to fd 2...
	I1115 09:50:20.214327  511970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:50:20.214571  511970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:50:20.214766  511970 out.go:368] Setting JSON to false
	I1115 09:50:20.214806  511970 mustload.go:66] Loading cluster: multinode-114173
	I1115 09:50:20.214927  511970 notify.go:221] Checking for updates...
	I1115 09:50:20.215186  511970 config.go:182] Loaded profile config "multinode-114173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:50:20.215200  511970 status.go:174] checking status of multinode-114173 ...
	I1115 09:50:20.215645  511970 cli_runner.go:164] Run: docker container inspect multinode-114173 --format={{.State.Status}}
	I1115 09:50:20.234645  511970 status.go:371] multinode-114173 host status = "Stopped" (err=<nil>)
	I1115 09:50:20.234694  511970 status.go:384] host is not running, skipping remaining checks
	I1115 09:50:20.234712  511970 status.go:176] multinode-114173 status: &{Name:multinode-114173 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:50:20.234785  511970 status.go:174] checking status of multinode-114173-m02 ...
	I1115 09:50:20.235092  511970 cli_runner.go:164] Run: docker container inspect multinode-114173-m02 --format={{.State.Status}}
	I1115 09:50:20.253003  511970 status.go:371] multinode-114173-m02 host status = "Stopped" (err=<nil>)
	I1115 09:50:20.253024  511970 status.go:384] host is not running, skipping remaining checks
	I1115 09:50:20.253031  511970 status.go:176] multinode-114173-m02 status: &{Name:multinode-114173-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.62s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (26.99s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114173 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1115 09:50:46.555070  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114173 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (26.386258762s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-114173 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (26.99s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-114173
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114173-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-114173-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.729331ms)

                                                
                                                
-- stdout --
	* [multinode-114173-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-114173-m02' is duplicated with machine name 'multinode-114173-m02' in profile 'multinode-114173'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-114173-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-114173-m03 --driver=docker  --container-runtime=crio: (21.926691472s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-114173
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-114173: exit status 80 (301.47991ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-114173 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-114173-m03 already exists in multinode-114173-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-114173-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-114173-m03: (2.377265483s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.75s)
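
The two failure paths above come from profile/machine name collisions: a new profile may not reuse an existing machine name, and node add refuses when its generated node name already belongs to another profile. A sketch of the first collision, reusing the names from the log:

    # multinode-114173-m02 is already the machine name of the second node,
    # so creating a profile with that name is rejected (the log shows exit status 14, MK_USAGE)
    minikube start -p multinode-114173-m02 --driver=docker --container-runtime=crio
    echo $?    # 14 in the run above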

                                                
                                    
TestPreload (115.45s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-209070 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1115 09:51:27.819494  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-209070 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (46.526485533s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-209070 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-209070 image pull gcr.io/k8s-minikube/busybox: (2.368639503s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-209070
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-209070: (5.92891297s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-209070 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-209070 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (57.99634348s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-209070 image list
helpers_test.go:175: Cleaning up "test-preload-209070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-209070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-209070: (2.394533578s)
--- PASS: TestPreload (115.45s)
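
The flow exercised here: build a cluster without the preloaded image tarball, add an image, restart, and confirm the image survived the restart. Roughly, with the same flags as the log (minikube on PATH standing in for out/minikube-linux-amd64):

    minikube start -p test-preload-209070 --memory=3072 --preload=false --wait=true \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p test-preload-209070 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-209070
    minikube start -p test-preload-209070 --memory=3072 --wait=true \
        --driver=docker --container-runtime=crio
    minikube -p test-preload-209070 image list     # busybox should still be listed
    minikube delete -p test-preload-209070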

                                                
                                    
TestScheduledStopUnix (96.72s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-820931 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-820931 --memory=3072 --driver=docker  --container-runtime=crio: (21.48729908s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820931 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:53:33.162638  528882 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:53:33.162744  528882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:33.162749  528882 out.go:374] Setting ErrFile to fd 2...
	I1115 09:53:33.162753  528882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:33.162937  528882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:53:33.163179  528882 out.go:368] Setting JSON to false
	I1115 09:53:33.163279  528882 mustload.go:66] Loading cluster: scheduled-stop-820931
	I1115 09:53:33.163650  528882 config.go:182] Loaded profile config "scheduled-stop-820931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:53:33.163719  528882 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/config.json ...
	I1115 09:53:33.163895  528882 mustload.go:66] Loading cluster: scheduled-stop-820931
	I1115 09:53:33.164024  528882 config.go:182] Loaded profile config "scheduled-stop-820931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-820931 -n scheduled-stop-820931
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820931 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:53:33.547018  529033 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:53:33.547310  529033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:33.547321  529033 out.go:374] Setting ErrFile to fd 2...
	I1115 09:53:33.547327  529033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:33.547567  529033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:53:33.547836  529033 out.go:368] Setting JSON to false
	I1115 09:53:33.548054  529033 daemonize_unix.go:73] killing process 528917 as it is an old scheduled stop
	I1115 09:53:33.548168  529033 mustload.go:66] Loading cluster: scheduled-stop-820931
	I1115 09:53:33.548588  529033 config.go:182] Loaded profile config "scheduled-stop-820931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:53:33.548684  529033 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/config.json ...
	I1115 09:53:33.548887  529033 mustload.go:66] Loading cluster: scheduled-stop-820931
	I1115 09:53:33.549021  529033 config.go:182] Loaded profile config "scheduled-stop-820931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1115 09:53:33.554840  359063 retry.go:31] will retry after 121.196µs: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.556052  359063 retry.go:31] will retry after 197.526µs: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.557233  359063 retry.go:31] will retry after 160.718µs: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.558417  359063 retry.go:31] will retry after 318.688µs: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.559596  359063 retry.go:31] will retry after 533.65µs: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.560729  359063 retry.go:31] will retry after 988.219µs: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.561855  359063 retry.go:31] will retry after 885.749µs: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.562970  359063 retry.go:31] will retry after 1.539272ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.565159  359063 retry.go:31] will retry after 3.583124ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.569353  359063 retry.go:31] will retry after 5.203259ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.575553  359063 retry.go:31] will retry after 8.431035ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.584850  359063 retry.go:31] will retry after 11.943381ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.597101  359063 retry.go:31] will retry after 18.751514ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.616352  359063 retry.go:31] will retry after 11.327712ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.628630  359063 retry.go:31] will retry after 27.745241ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
I1115 09:53:33.657612  359063 retry.go:31] will retry after 60.383736ms: open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820931 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820931 -n scheduled-stop-820931
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-820931
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820931 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:53:59.470371  529686 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:53:59.470504  529686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:59.470514  529686 out.go:374] Setting ErrFile to fd 2...
	I1115 09:53:59.470518  529686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:53:59.470727  529686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:53:59.470952  529686 out.go:368] Setting JSON to false
	I1115 09:53:59.471034  529686 mustload.go:66] Loading cluster: scheduled-stop-820931
	I1115 09:53:59.471379  529686 config.go:182] Loaded profile config "scheduled-stop-820931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:53:59.471460  529686 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/scheduled-stop-820931/config.json ...
	I1115 09:53:59.471655  529686 mustload.go:66] Loading cluster: scheduled-stop-820931
	I1115 09:53:59.471748  529686 config.go:182] Loaded profile config "scheduled-stop-820931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-820931
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-820931: exit status 7 (84.683386ms)

                                                
                                                
-- stdout --
	scheduled-stop-820931
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820931 -n scheduled-stop-820931
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820931 -n scheduled-stop-820931: exit status 7 (80.537694ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-820931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-820931
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-820931: (3.684782437s)
--- PASS: TestScheduledStopUnix (96.72s)
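
The schedule/cancel/schedule sequence above maps onto three CLI operations; a minimal sketch, assuming the timings from the log:

    minikube stop -p scheduled-stop-820931 --schedule 5m        # arm a stop five minutes out
    minikube stop -p scheduled-stop-820931 --cancel-scheduled   # cancel all scheduled stops
    minikube stop -p scheduled-stop-820931 --schedule 15s       # arm a short one and let it fire
    sleep 30                                                    # give the scheduled stop time to run
    minikube status --format='{{.Host}}' -p scheduled-stop-820931   # prints Stopped, exit status 7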

                                                
                                    
TestInsufficientStorage (12.36s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-066114 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-066114 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.881076327s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0f5161fb-03d4-4995-b42c-56967efe3dae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-066114] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae138cbd-9f2f-45fe-9428-25b921000352","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21895"}}
	{"specversion":"1.0","id":"39b286a1-b156-4875-b075-140037d24b2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"30f5e2fe-d1e2-43d7-bf8f-906558fbf9b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig"}}
	{"specversion":"1.0","id":"236d91f3-d7c3-41a6-87b0-3969af536375","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube"}}
	{"specversion":"1.0","id":"812b5949-8ff6-412b-86f1-b8cc317e1134","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fbd48545-1280-4f0e-887a-ba963743e503","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"122705cb-63c8-4e82-8587-4c2d688683a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"24300afa-7a97-461b-8063-bdb0ec5cd210","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bbbae126-8f06-4081-baf9-78127352e349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"92c4334d-d94d-4025-9388-1324544c4fa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1b1738ee-9643-4afb-9bb2-e2d16e5aa78f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-066114\" primary control-plane node in \"insufficient-storage-066114\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"484fb223-a66f-444d-b9ea-1e933c05983d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"250a80a4-681f-4d89-bf20-bf78fa0697c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cec1324e-0fb6-4b27-a1ff-0b887800d9e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-066114 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-066114 --output=json --layout=cluster: exit status 7 (294.028963ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-066114","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-066114","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 09:54:58.499635  532209 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-066114" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-066114 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-066114 --output=json --layout=cluster: exit status 7 (287.112777ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-066114","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-066114","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 09:54:58.787294  532320 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-066114" does not appear in /home/jenkins/minikube-integration/21895-355485/kubeconfig
	E1115 09:54:58.797514  532320 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/insufficient-storage-066114/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-066114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-066114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-066114: (1.894667259s)
--- PASS: TestInsufficientStorage (12.36s)
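
The storage check is driven by the two MINIKUBE_TEST_* values visible in the JSON output, which appear to override the detected disk capacity; with them in place, start aborts with exit code 26 before the node is created. A sketch of the same invocation, assuming those environment variables are honored when set externally:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100     # test-only overrides, as shown in the log above
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p insufficient-storage-066114 --memory=3072 --output=json --wait=true \
        --driver=docker --container-runtime=crio        # exits 26 (RSRC_DOCKER_STORAGE)
    minikube status -p insufficient-storage-066114 --output=json --layout=cluster   # StatusCode 507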

                                                
                                    
TestRunningBinaryUpgrade (56.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.342503152 start -p running-upgrade-622883 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.342503152 start -p running-upgrade-622883 --memory=3072 --vm-driver=docker  --container-runtime=crio: (28.885440314s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-622883 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-622883 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.889854264s)
helpers_test.go:175: Cleaning up "running-upgrade-622883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-622883
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-622883: (2.515523834s)
--- PASS: TestRunningBinaryUpgrade (56.91s)
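
This upgrade path starts a cluster with an older released binary and then re-runs start with the binary under test while the cluster is still running. Sketch, with a hypothetical path for the old binary (the test downloads v1.32.0 to a temp file):

    OLD=/tmp/minikube-v1.32.0     # hypothetical path to the previously released binary
    $OLD start -p running-upgrade-622883 --memory=3072 --vm-driver=docker --container-runtime=crio
    # upgrade in place: the new binary takes over the still-running cluster
    out/minikube-linux-amd64 start -p running-upgrade-622883 --memory=3072 \
        --alsologtostderr -v=1 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-622883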

                                                
                                    
TestKubernetesUpgrade (314.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.45150035s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-405833
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-405833: (4.320356555s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-405833 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-405833 status --format={{.Host}}: exit status 7 (101.058634ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1115 09:55:46.554815  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/addons-454747/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.736685722s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-405833 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (104.403447ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-405833] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-405833
	    minikube start -p kubernetes-upgrade-405833 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4058332 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-405833 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-405833 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.70964634s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-405833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-405833
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-405833: (2.460425393s)
--- PASS: TestKubernetesUpgrade (314.95s)

                                                
                                    
x
+
TestMissingContainerUpgrade (118.65s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3393596422 start -p missing-upgrade-213922 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3393596422 start -p missing-upgrade-213922 --memory=3072 --driver=docker  --container-runtime=crio: (1m10.574971071s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-213922
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-213922: (1.787482186s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-213922
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-213922 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-213922 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.735857914s)
helpers_test.go:175: Cleaning up "missing-upgrade-213922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-213922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-213922: (4.868483137s)
--- PASS: TestMissingContainerUpgrade (118.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (89.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3570579106 start -p stopped-upgrade-505385 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3570579106 start -p stopped-upgrade-505385 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m12.007092487s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3570579106 -p stopped-upgrade-505385 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3570579106 -p stopped-upgrade-505385 stop: (2.343461052s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-505385 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1115 09:56:27.820302  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-505385 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.954095383s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (89.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-505385
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
x
+
TestPause/serial/Start (46.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-717282 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-717282 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (46.037810992s)
--- PASS: TestPause/serial/Start (46.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941483 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-941483 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (112.709986ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-941483] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (26.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941483 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941483 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.137671759s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-941483 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-034018 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-034018 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (182.545175ms)

                                                
                                                
-- stdout --
	* [false-034018] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21895
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:57:03.410038  561522 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:57:03.410209  561522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:03.410221  561522 out.go:374] Setting ErrFile to fd 2...
	I1115 09:57:03.410227  561522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:57:03.410562  561522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-355485/.minikube/bin
	I1115 09:57:03.411186  561522 out.go:368] Setting JSON to false
	I1115 09:57:03.412483  561522 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5964,"bootTime":1763194659,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:57:03.412588  561522 start.go:143] virtualization: kvm guest
	I1115 09:57:03.414415  561522 out.go:179] * [false-034018] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:57:03.416133  561522 out.go:179]   - MINIKUBE_LOCATION=21895
	I1115 09:57:03.416166  561522 notify.go:221] Checking for updates...
	I1115 09:57:03.419063  561522 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:57:03.420430  561522 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21895-355485/kubeconfig
	I1115 09:57:03.421685  561522 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-355485/.minikube
	I1115 09:57:03.425641  561522 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:57:03.427101  561522 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:57:03.429102  561522 config.go:182] Loaded profile config "NoKubernetes-941483": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:03.429299  561522 config.go:182] Loaded profile config "kubernetes-upgrade-405833": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:03.429514  561522 config.go:182] Loaded profile config "pause-717282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1115 09:57:03.429646  561522 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:57:03.457711  561522 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:57:03.457806  561522 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:57:03.517474  561522 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-15 09:57:03.507038967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:57:03.517614  561522 docker.go:319] overlay module found
	I1115 09:57:03.519511  561522 out.go:179] * Using the docker driver based on user configuration
	I1115 09:57:03.520844  561522 start.go:309] selected driver: docker
	I1115 09:57:03.520859  561522 start.go:930] validating driver "docker" against <nil>
	I1115 09:57:03.520871  561522 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:57:03.522456  561522 out.go:203] 
	W1115 09:57:03.523662  561522 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1115 09:57:03.524846  561522 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-034018 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-034018" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 09:55:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-405833
contexts:
- context:
    cluster: kubernetes-upgrade-405833
    user: kubernetes-upgrade-405833
  name: kubernetes-upgrade-405833
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-405833
  user:
    client-certificate: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/kubernetes-upgrade-405833/client.crt
    client-key: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/kubernetes-upgrade-405833/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-034018

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034018"

                                                
                                                
----------------------- debugLogs end: false-034018 [took: 3.535994945s] --------------------------------
helpers_test.go:175: Cleaning up "false-034018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-034018
--- PASS: TestNetworkPlugins/group/false (3.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (25.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.722616045s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-941483 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-941483 status -o json: exit status 2 (333.967328ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-941483","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-941483
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-941483: (2.093676179s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (5.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-717282 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-717282 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.930646074s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941483 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.161890166s)
--- PASS: TestNoKubernetes/serial/Start (4.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21895-355485/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-941483 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-941483 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.651526ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (16.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.421555924s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-941483
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-941483: (2.141544748s)
--- PASS: TestNoKubernetes/serial/Stop (2.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-941483 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-941483 --driver=docker  --container-runtime=crio: (7.248275007s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-941483 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-941483 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.6147ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (48.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.640821428s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (48.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (53.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.274439752s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-335655 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0f8f9c9d-462a-4efa-a9dc-07df32af16c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0f8f9c9d-462a-4efa-a9dc-07df32af16c9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003167169s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-335655 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-335655 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-335655 --alsologtostderr -v=3: (16.068109003s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-559401 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4972a866-c48a-427f-8837-dd6d8889a805] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4972a866-c48a-427f-8837-dd6d8889a805] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003886267s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-559401 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-559401 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-559401 --alsologtostderr -v=3: (16.291691546s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655: exit status 7 (88.534777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-335655 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-335655 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.111530543s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335655 -n old-k8s-version-335655
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401: exit status 7 (89.623449ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-559401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (43.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-559401 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.56295863s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-559401 -n no-preload-559401
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.713007582s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.71s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5wmkv" [de87fac4-aa42-4aaf-bb60-25d5a7066747] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00311178s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5wmkv" [de87fac4-aa42-4aaf-bb60-25d5a7066747] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00650818s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-335655 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335655 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nhbwb" [b2804b3e-3418-4b75-93a0-a568ca6de288] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004143532s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nhbwb" [b2804b3e-3418-4b75-93a0-a568ca6de288] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004113609s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-559401 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.003992095s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-559401 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (31.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (31.359578334s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-430513 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e3cc26c8-a3a0-4086-9b89-4cc9281a80ab] Pending
helpers_test.go:352: "busybox" [e3cc26c8-a3a0-4086-9b89-4cc9281a80ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e3cc26c8-a3a0-4086-9b89-4cc9281a80ab] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.007959961s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-430513 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-430513 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-430513 --alsologtostderr -v=3: (16.867367036s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (40.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.271335524s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-679865 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cac86649-71f6-4c8c-b775-c310a8db63bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cac86649-71f6-4c8c-b775-c310a8db63bc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004340442s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-679865 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513: exit status 7 (89.989229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-430513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
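
EnableAddonAfterStop relies on `minikube status` exiting non-zero (exit status 7 in this run, with "Stopped" on stdout) when the host is down, which the test tolerates before enabling the dashboard addon against the stopped profile. A rough sketch of that flow from Go is below; the binary path, profile name, and image override are copied from the logged commands, and the handling of the exit code follows the "status error: exit status 7 (may be ok)" note above rather than any documented exit-code table.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "embed-certs-430513"

	// `minikube status` exits non-zero when the host is not Running; in the run
	// above it exited 7 with "Stopped" on stdout, which the test tolerates.
	status := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", profile)
	if err := status.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("status exit code %d (may be ok for a stopped cluster)\n", ee.ExitCode())
		} else {
			panic(err)
		}
	}

	// Enabling an addon is still allowed against a stopped profile.
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
		"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		panic(fmt.Errorf("enable dashboard: %v\n%s", err, out))
	}
	fmt.Println("dashboard addon enabled while the cluster is stopped")
}
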

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (49.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-430513 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.280427454s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-430513 -n embed-certs-430513
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.69s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-783113 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-783113 --alsologtostderr -v=3: (8.021102808s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (19.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-679865 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-679865 --alsologtostderr -v=3: (19.046328228s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (19.05s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113: exit status 7 (87.243587ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-783113 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (12.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-783113 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.592804956s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-783113 -n newest-cni-783113
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-783113 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-034018 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-034018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rp2nz" [713707f8-0ed4-4e52-854a-31b374b7f2a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rp2nz" [713707f8-0ed4-4e52-854a-31b374b7f2a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004522477s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)
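
The NetCatPod steps force-replace the netcat deployment from the suite's testdata manifest and then wait for an app=netcat pod to become healthy. The sketch below reproduces the redeploy but waits with `kubectl rollout status` instead of the label polling the test helper uses; the context name and manifest path are copied from the logged command, and the 15-minute timeout mirrors the wait above.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and panics with its combined output on failure.
func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("%v: %v\n%s", args, err, out))
	}
	fmt.Printf("%s", out)
}

func main() {
	// Manifest path and context name copied from the logged command.
	run("kubectl", "--context", "auto-034018", "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	// Wait on the Deployment rollout rather than polling app=netcat pods as the
	// suite's helper does; 15m mirrors the wait used above.
	run("kubectl", "--context", "auto-034018", "rollout", "status", "deployment/netcat", "--timeout=15m")
}
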

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865: exit status 7 (88.853347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-679865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-679865 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.285954891s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679865 -n default-k8s-diff-port-679865
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (44.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.170338897s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-034018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
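
The DNS, Localhost, and HairPin checks for each network plugin all exec into the netcat deployment: nslookup against kubernetes.default verifies in-cluster DNS, nc against localhost:8080 verifies the pod can reach its own listener, and nc against the netcat service name verifies hairpin traffic back through the service. A small sketch that mirrors those three kubectl invocations is below, using the same context name as the logged commands.

package main

import (
	"fmt"
	"os/exec"
)

// probe execs a shell command inside the netcat deployment, mirroring the
// kubectl invocations logged above.
func probe(kctx, shellCmd string) error {
	out, err := exec.Command("kubectl", "--context", kctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	kctx := "auto-034018"
	checks := []string{
		"nslookup kubernetes.default",    // DNS: resolve the API service name in-cluster
		"nc -w 5 -i 5 -z localhost 8080", // Localhost: pod reaches its own listener
		"nc -w 5 -i 5 -z netcat 8080",    // HairPin: traffic back through the service name
	}
	for _, c := range checks {
		if err := probe(kctx, c); err != nil {
			panic(err)
		}
	}
}
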

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9dvs6" [0969e69a-a9ba-4971-9bdb-640845c9f45d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003570519s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9dvs6" [0969e69a-a9ba-4971-9bdb-640845c9f45d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.075035107s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-430513 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.001407403s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-430513 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (55.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.293384807s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24grr" [a1d81f82-7521-4a40-81a2-df544fe4a3a6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003470461s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-w8lq4" [889f4ca2-eb36-4c3a-b40f-058c8814a6af] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004147075s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24grr" [a1d81f82-7521-4a40-81a2-df544fe4a3a6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004519429s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-679865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-034018 "pgrep -a kubelet"
I1115 10:02:49.176459  359063 config.go:182] Loaded profile config "kindnet-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-034018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2txpp" [21e6fc96-d996-4459-815a-b029ebcd28d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2txpp" [21e6fc96-d996-4459-815a-b029ebcd28d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004422737s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-679865 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-034018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (61.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m1.660047563s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xkxbl" [bc2efc20-430c-4a2d-9e78-e40e8e22bf37] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-xkxbl" [bc2efc20-430c-4a2d-9e78-e40e8e22bf37] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004018939s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (48.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.435188653s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-034018 "pgrep -a kubelet"
I1115 10:03:21.859645  359063 config.go:182] Loaded profile config "calico-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-034018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rpgmt" [8e803332-095b-4adc-9135-04fce8859937] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rpgmt" [8e803332-095b-4adc-9135-04fce8859937] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005911829s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-034018 "pgrep -a kubelet"
I1115 10:03:30.800534  359063 config.go:182] Loaded profile config "custom-flannel-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-034018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xhb5j" [643e0159-d41f-43ec-b1e4-c065e9d7e421] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xhb5j" [643e0159-d41f-43ec-b1e4-c065e9d7e421] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004461666s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-034018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-034018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (32.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-034018 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (32.956549952s)
--- PASS: TestNetworkPlugins/group/bridge/Start (32.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-034018 "pgrep -a kubelet"
I1115 10:04:05.963059  359063 config.go:182] Loaded profile config "enable-default-cni-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-034018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s95gk" [47eca2dd-2c44-4792-99a9-a04c7f4b66ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 10:04:06.465767  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:06.472117  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:06.483692  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:06.505201  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:06.546668  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:06.628695  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:06.791052  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:07.112933  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-s95gk" [47eca2dd-2c44-4792-99a9-a04c7f4b66ba] Running
E1115 10:04:11.598408  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003934037s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-l84ns" [d7bea1ef-d0c3-4912-9847-ce8f4f54f561] Running
E1115 10:04:07.755194  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:09.036582  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003866519s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-034018 "pgrep -a kubelet"
I1115 10:04:13.792533  359063 config.go:182] Loaded profile config "flannel-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-034018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bt8dr" [9984768b-e5e1-4258-8e2a-adb7bf932c65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bt8dr" [9984768b-e5e1-4258-8e2a-adb7bf932c65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003596776s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-034018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-034018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1115 10:04:23.216609  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:23.223015  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:23.235189  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:23.257417  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-034018 "pgrep -a kubelet"
E1115 10:04:26.962385  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/old-k8s-version-335655/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1115 10:04:27.131750  359063 config.go:182] Loaded profile config "bridge-034018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-034018 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bv982" [9c67eb9e-5bdc-4192-b250-a18474573f9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1115 10:04:28.350590  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/no-preload-559401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bv982" [9c67eb9e-5bdc-4192-b250-a18474573f9e] Running
E1115 10:04:30.892586  359063 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/functional-838035/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003345375s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-034018 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-034018 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-553319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-553319
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (4s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-034018 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-034018" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 09:55:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-405833
contexts:
- context:
    cluster: kubernetes-upgrade-405833
    user: kubernetes-upgrade-405833
  name: kubernetes-upgrade-405833
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-405833
  user:
    client-certificate: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/kubernetes-upgrade-405833/client.crt
    client-key: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/kubernetes-upgrade-405833/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-034018

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034018"

                                                
                                                
----------------------- debugLogs end: kubenet-034018 [took: 3.803163718s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-034018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-034018
--- SKIP: TestNetworkPlugins/group/kubenet (4.00s)

TestNetworkPlugins/group/cilium (4.18s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-034018 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-034018" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 09:55:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-405833
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21895-355485/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 09:57:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-717282
contexts:
- context:
    cluster: kubernetes-upgrade-405833
    user: kubernetes-upgrade-405833
  name: kubernetes-upgrade-405833
- context:
    cluster: pause-717282
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 09:57:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-717282
  name: pause-717282
current-context: pause-717282
kind: Config
users:
- name: kubernetes-upgrade-405833
  user:
    client-certificate: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/kubernetes-upgrade-405833/client.crt
    client-key: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/kubernetes-upgrade-405833/client.key
- name: pause-717282
  user:
    client-certificate: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.crt
    client-key: /home/jenkins/minikube-integration/21895-355485/.minikube/profiles/pause-717282/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-034018

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-034018" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034018"

                                                
                                                
----------------------- debugLogs end: cilium-034018 [took: 3.999555228s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-034018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-034018
--- SKIP: TestNetworkPlugins/group/cilium (4.18s)